<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://vinidel.github.io/vinidelascio.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://vinidel.github.io/vinidelascio.github.io/" rel="alternate" type="text/html" /><updated>2026-02-25T05:47:19+00:00</updated><id>https://vinidel.github.io/vinidelascio.github.io/feed.xml</id><title type="html">Vinicius Delascio</title><subtitle>Software engineer writing about AI-assisted development, engineering workflows, and building better systems.</subtitle><author><name>Vinicius Delascio</name></author><entry><title type="html">How I’m Pairing With Codex/Cursor Using Roles, Gates, and Repeatable Artifacts</title><link href="https://vinidel.github.io/vinidelascio.github.io/pairing-with-ai/" rel="alternate" type="text/html" title="How I’m Pairing With Codex/Cursor Using Roles, Gates, and Repeatable Artifacts" /><published>2026-02-25T00:00:00+00:00</published><updated>2026-02-25T00:00:00+00:00</updated><id>https://vinidel.github.io/vinidelascio.github.io/pairing-with-ai</id><content type="html" xml:base="https://vinidel.github.io/vinidelascio.github.io/pairing-with-ai/"><![CDATA[<figure class="diagram-figure">
  <img src="/vinidelascio.github.io/assets/images/new-workflow-hero-diagram-2026-02-25-053213.svg" alt="AI Dev Cycle" />
</figure>

<h2 id="i-stopped-just-prompting-and-built-a-repeatable-ai-workflow">I Stopped “Just Prompting” and Built a Repeatable AI Workflow</h2>

<p>I’ve been experimenting with AI-assisted development for a while, and I already wrote about the idea of using agents for software work.</p>

<p>What changed recently is that the pattern stopped feeling like a clever setup and started feeling like a real workflow.</p>

<p>I’ve been maturing it inside a real project (<code class="language-plaintext highlighter-rouge">my-menu</code>), and now I’m using <strong>Codex</strong> (and sometimes <strong>Cursor</strong>) to run the same set of agents from <code class="language-plaintext highlighter-rouge">.cursor/rules/</code>. The result is simple:</p>

<p>AI-assisted development is faster now, but more importantly, it’s becoming <strong>structured, predictable, and repeatable</strong>.</p>

<h2 id="the-core-shift">The Core Shift</h2>

<p>The big change was moving from: <strong>“ask the AI to do a task”</strong></p>

<p>to: <strong>“run a stage in a workflow with a clear role, inputs, outputs, and exit gate”</strong></p>

<p>That sounds small, but it changes everything. Instead of hoping the model does the right thing, I give it a <strong>job</strong> inside a process.</p>

<h2 id="the-pattern-im-using">The Pattern I’m Using</h2>

<p>In this repo, I defined a stage-gated workflow in <code class="language-plaintext highlighter-rouge">workflow/WORKFLOW.md</code> with explicit roles:</p>

<ul>
  <li>Orchestrator (brief/scope)</li>
  <li>Implementer (code)</li>
  <li>Tester (tests from acceptance criteria)</li>
  <li>Refactorer (structure only)</li>
  <li>Hardener (risk sweep)</li>
  <li>Documenter (decisions/gaps/ops notes)</li>
  <li>Critic (reviews every stage)</li>
</ul>

<p>This is the key idea: <strong>the Critic reviews every stage before the next one starts</strong>. That one rule alone reduces a lot of “AI momentum mistakes” (where something looks good and moves forward too quickly).</p>
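<p>For a sense of what these role files contain, here’s a hypothetical sketch of a Critic rule (the actual <code class="language-plaintext highlighter-rouge">.cursor/rules/*.mdc</code> files aren’t reproduced in this post, so the section names and wording below are illustrative):</p>

```markdown
# Role: Critic

## Mission
Review the output of the previous stage against the brief and its exit gate.

## Allowed
- Read PROJECT.md, the feature brief, and the stage output
- Write findings to docs/critique.md

## Not allowed
- Editing code
- Expanding scope

## Exit gate
- Every acceptance scenario is addressed or explicitly deferred
- Blocking findings are resolved before the next stage starts
```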

<h2 id="why-this-works-better-than-a-single-super-prompt">Why This Works Better Than a Single “Super Prompt”</h2>

<p>Because each agent has:</p>

<ul>
  <li>a narrow mission</li>
  <li>explicit allowed/not-allowed actions</li>
  <li>a stage exit gate</li>
  <li>required outputs</li>
</ul>

<p>Example: the Implementer is explicitly told to stay inside the brief and avoid refactors/optimization. The Tester is explicitly told to derive tests from the brief, not the implementation. The Hardener is explicitly focused on security/perf/observability/resilience.</p>

<p>That separation makes the outputs more consistent and makes it easier for me to review.</p>

<h2 id="the-most-important-file-projectmd">The Most Important File: <code class="language-plaintext highlighter-rouge">PROJECT.md</code></h2>

<p>A big maturity improvement was introducing a real project context file (<code class="language-plaintext highlighter-rouge">PROJECT.md</code>) and making <strong>every agent read it first</strong>.</p>

<p>That file became the shared memory for:</p>

<ul>
  <li>product scope</li>
  <li>architecture</li>
  <li>conventions</li>
  <li>locked decisions</li>
  <li>constraints</li>
  <li>glossary</li>
  <li>current status</li>
</ul>

<p>This solved a recurring problem: agents making “reasonable” decisions that were wrong for <em>this</em> project.</p>

<p>Now the workflow is reusable, but the project behavior is grounded.</p>

<h2 id="the-workflow-produces-artifacts-not-just-code">The Workflow Produces Artifacts, Not Just Code</h2>

<p>Another reason this is working: each stage leaves evidence.</p>

<p>The repo now accumulates useful artifacts like:</p>

<ul>
  <li><code class="language-plaintext highlighter-rouge">docs/briefs/*.md</code> (scope and acceptance scenarios)</li>
  <li><code class="language-plaintext highlighter-rouge">docs/critique.md</code> (stage review feedback)</li>
  <li><code class="language-plaintext highlighter-rouge">docs/implementation-notes.md</code> (out-of-scope issues spotted during implementation)</li>
  <li><code class="language-plaintext highlighter-rouge">docs/hardening-notes.md</code> (risks, assumptions, deferred items)</li>
  <li>feature delivery docs in <code class="language-plaintext highlighter-rouge">docs/*.md</code></li>
</ul>

<p>This creates a lightweight audit trail of how a feature matured, not just the final code diff.</p>

<p>That matters when working with AI because the real risk is not only bad code, it’s <strong>hidden decisions</strong>.</p>
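<p>The artifact requirement can be enforced rather than just encouraged. Here’s a minimal sketch (hypothetical helper, not part of the repo; the paths follow the list above) that a CI step or the Critic could run before letting a stage close:</p>

```python
from pathlib import Path

# Artifacts a completed feature is expected to leave behind.
# Names follow the list above; adjust to your repo layout.
REQUIRED_ARTIFACTS = [
    "docs/critique.md",
    "docs/implementation-notes.md",
    "docs/hardening-notes.md",
]

def missing_artifacts(repo_root: str, feature: str) -> list[str]:
    """Return the required artifact paths that do not exist yet."""
    root = Path(repo_root)
    required = REQUIRED_ARTIFACTS + [f"docs/briefs/{feature}.md"]
    return [p for p in required if not (root / p).exists()]
```

<p>While the returned list is non-empty, the stage is not done, no matter how finished the code looks.</p>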

<figure>
  <img src="/vinidelascio.github.io/assets/images/new-workflow-full-diagram-2026-02-25-053158.svg" alt="AI Dev Cycle" />
</figure>

<h2 id="what-matured-in-the-last-few-days">What Matured in the Last Few Days</h2>

<p>A few things made this feel much more stable recently:</p>

<h3 id="1-clear-role-boundaries-in-cursorrules">1) Clear role boundaries in <code class="language-plaintext highlighter-rouge">.cursor/rules/</code></h3>

<p>Each agent file is now explicit about mission, constraints, and exit gates.
  This reduced role drift a lot.</p>

<h3 id="2-critic-as-a-mandatory-gate-not-optional-review">2) Critic as a mandatory gate (not optional review)</h3>

<p>The Critic is not “nice to have”. It’s the brake pedal.</p>

<h3 id="3-stage-labels-and-pr-progression">3) Stage labels and PR progression</h3>

<p>The workflow maps stages to PR maturity states, so the process is visible and not just in my head.</p>

<h3 id="4-a-gate-keeper-for-gitpr-hygiene">4) A Gate Keeper for git/PR hygiene</h3>

<p>I added a <code class="language-plaintext highlighter-rouge">gatekeeper</code> rule to enforce commit/push/PR updates in the same staged workflow.
  This is important because process breaks often happen at the VCS/PR layer, not only in coding.</p>
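<p>As one concrete (and hypothetical) example of the hygiene such a rule can enforce, a check could require every commit message to start with the current stage label, so the VCS history mirrors the workflow:</p>

```python
import re

# Hypothetical commit-message convention for the staged workflow:
# "<stage-label>: <summary>", e.g. "stage-2-tests: cover unhappy paths".
STAGE_LABELS = (
    "stage-1-impl",
    "stage-2-tests",
    "stage-3-refactor",
    "stage-4-hardening",
    "ready-for-review",
)

COMMIT_RE = re.compile(
    r"^(?P<stage>" + "|".join(re.escape(s) for s in STAGE_LABELS) + r"): \S.*"
)

def check_commit_message(message: str) -> bool:
    """True if the first line follows the staged-commit convention."""
    first_line = message.splitlines()[0] if message else ""
    return COMMIT_RE.match(first_line) is not None
```

<p>Wired into a commit hook or CI job, this turns “process breaks at the VCS layer” into a hard failure instead of a silent drift.</p>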

<h3 id="5-reusability-across-tools-codex-and-cursor">5) Reusability across tools (Codex and Cursor)</h3>

<p>The workflow lives in repo files, not in one vendor UI.
  That means I can run the same pattern with different AI tools and keep the process consistent.</p>

<h2 id="real-benefit-im-seeing">Real Benefit I’m Seeing</h2>

<p>The biggest win is not “AI writes more code”.</p>

<p>It’s that I can now pair with the agent in a way that feels closer to working with a junior/mid engineer inside a defined team process:</p>

<ul>
  <li>I set direction</li>
  <li>the agent executes a stage</li>
  <li>the critic reviews</li>
  <li>we move forward only when the stage is actually done</li>
</ul>

<p>This keeps velocity high without turning the project into chaos.</p>

<h2 id="what-id-recommend-if-you-want-to-try-this">What I’d Recommend If You Want to Try This</h2>

<p>Start small and make it boring:</p>

<ol>
  <li>Create a <code class="language-plaintext highlighter-rouge">PROJECT.md</code> and force every agent to read it first.</li>
  <li>Split work into stages (brief, implement, test, refactor, harden, document).</li>
  <li>Give each stage a dedicated role with clear “not allowed” rules.</li>
  <li>Add a critic/reviewer step between stages.</li>
  <li>Require written artifacts (briefs, critique, hardening notes), not only code.</li>
  <li>Treat PR state as part of the workflow, not an afterthought.</li>
</ol>

<p>Don’t optimize for the smartest prompt.
  Optimize for a workflow you can run again next week.</p>

<h2 id="final-thought">Final Thought</h2>

<p>AI gets impressive results quickly, but sustainable progress comes from structure.</p>

<p>What I’m building now is less about “autonomous AI” and more about a <strong>reliable collaboration system</strong>: human direction + role-based agents + stage gates + artifacts.</p>

<p>That combination has been working well for me, and it’s becoming my default way to ship features with AI.</p>

<h2 id="repo-specific-details-i-used">Repo-Specific Details I Used</h2>

<ul>
  <li>Workflow definition: <code class="language-plaintext highlighter-rouge">workflow/WORKFLOW.md</code></li>
  <li>Shared project context pattern: <code class="language-plaintext highlighter-rouge">PROJECT.md</code></li>
  <li>Agent roles:
    <ul>
      <li><code class="language-plaintext highlighter-rouge">.cursor/rules/orchestrator.mdc</code></li>
      <li><code class="language-plaintext highlighter-rouge">.cursor/rules/implementer.mdc</code></li>
      <li><code class="language-plaintext highlighter-rouge">.cursor/rules/tester.mdc</code></li>
      <li><code class="language-plaintext highlighter-rouge">.cursor/rules/refactorer.mdc</code></li>
      <li><code class="language-plaintext highlighter-rouge">.cursor/rules/hardener.mdc</code></li>
      <li><code class="language-plaintext highlighter-rouge">.cursor/rules/documenter.mdc</code></li>
      <li><code class="language-plaintext highlighter-rouge">.cursor/rules/critic.mdc</code></li>
    </ul>
  </li>
  <li>PR/VCS flow guard: <code class="language-plaintext highlighter-rouge">.cursor/rules/gatekeeper.md</code></li>
</ul>]]></content><author><name>Vinicius Delascio</name></author><category term="engineering" /><category term="ai" /><category term="workflow" /><summary type="html"><![CDATA[I turned my AI coding workflow into a structured, repeatable system using role-based agents, stage gates, and shared project context.]]></summary></entry><entry><title type="html">A Practical AI-Native Dev Cycle (With Real Files and PR Flow)</title><link href="https://vinidel.github.io/vinidelascio.github.io/practical-dev-cycle/" rel="alternate" type="text/html" title="A Practical AI-Native Dev Cycle (With Real Files and PR Flow)" /><published>2026-02-12T00:00:00+00:00</published><updated>2026-02-12T00:00:00+00:00</updated><id>https://vinidel.github.io/vinidelascio.github.io/practical-dev-cycle</id><content type="html" xml:base="https://vinidel.github.io/vinidelascio.github.io/practical-dev-cycle/"><![CDATA[<figure class="diagram-figure">
  <img src="/vinidelascio.github.io/assets/images/cycle-high-level.svg" alt="AI Dev Cycle" />
</figure>

<p>In my previous post, I introduced my Personal Engineering Operating System — a stage-gated workflow for AI-assisted development.</p>

<p>This post is more practical.</p>

<p>Here’s exactly how a feature moves from idea to merged PR in my setup — including:</p>

<ul>
  <li>The markdown files created</li>
  <li>The folder structure</li>
  <li>How AI uses those files</li>
  <li>What changes in each stage</li>
  <li>What the final PR looks like</li>
</ul>

<p>No theory. Just mechanics.</p>

<hr />

<h2 id="the-repository-structure">The Repository Structure</h2>

<p>Here’s the minimal structure that makes this work:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docs/
  briefs/
    feature-name.md

  engineering/
    personal-engineering-os-v1.md

.ai/
  WORKFLOW.md

.github/
  pull_request_template.md
</code></pre></div></div>
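<p>If you want to reproduce the layout above, a few lines suffice (a convenience sketch, not part of the described setup):</p>

```python
from pathlib import Path

# Directories and files from the minimal structure above.
# Trailing "/" marks a directory; everything else is a placeholder file.
SCAFFOLD = [
    "docs/briefs/",
    "docs/engineering/",
    ".ai/WORKFLOW.md",
    ".github/pull_request_template.md",
]

def scaffold(root: str) -> None:
    """Create the workflow's directories and empty placeholder files."""
    for entry in SCAFFOLD:
        path = Path(root) / entry
        if entry.endswith("/"):
            path.mkdir(parents=True, exist_ok=True)
        else:
            path.parent.mkdir(parents=True, exist_ok=True)
            path.touch()
```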

<p>These files are not just documentation for humans.</p>

<p>They shape AI behaviour.</p>

<hr />

<h2 id="stage-0--feature-brief-creation">Stage 0 — Feature Brief Creation</h2>

<p>When a new feature starts, the first artifact is:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docs/briefs/daily-mood-logging.md
</code></pre></div></div>

<p>The brief follows a strict structure:</p>

<h3 id="feature-brief-template-annotated">Feature Brief Template (Annotated)</h3>

<p>This document explains what each section in a Stage 0 Feature Brief is for.</p>

<p>The goal of Stage 0 is clarity, not code.
This document exists to remove ambiguity before implementation begins.</p>

<details class="expandable-template">
<summary><strong>Feature Brief Template (Annotated)</strong> — click to expand</summary>


<figure class="highlight"><pre><code class="language-md" data-lang="md"><span class="gu">## Status</span>

Indicates which stage the feature is currently in.

Example:
Stage 0 — Framing

This creates visibility and prevents premature coding.
<span class="p">
---
</span>
<span class="gu">## Alternative name</span>

Optional.

Used to clarify terminology and avoid naming confusion early.
Helps prevent renaming drift during implementation.
<span class="p">
---
</span>
<span class="gu">## Problem</span>

Purpose:
Define the real issue being solved.

This section should:
<span class="p">-</span> Explain the gap in current behavior
<span class="p">-</span> Clarify why the change is needed
<span class="p">-</span> Avoid describing the solution

If this section is unclear, the feature is not ready.
<span class="p">
---
</span>
<span class="gu">## Goal</span>

Purpose:
Define what success looks like.

This should:
<span class="p">-</span> Be concrete
<span class="p">-</span> Be testable
<span class="p">-</span> Avoid future scope creep

If you cannot measure or verify it, it’s not a goal.
<span class="p">
---
</span>
<span class="gu">## Who</span>

Purpose:
Define affected user segments.

This prevents:
<span class="p">-</span> Hidden edge cases
<span class="p">-</span> Partial implementations
<span class="p">-</span> Missed flows

List all user types impacted by the feature.
<span class="p">
---
</span>
<span class="gu">## What We Capture / Change</span>

Purpose:
Clarify data-level changes.

This section should:
<span class="p">-</span> List new fields
<span class="p">-</span> List updated fields
<span class="p">-</span> Clarify storage implications

This helps Stage 1 and Stage 2 later.
<span class="p">
---
</span>
<span class="gu">## Success Criteria</span>

Purpose:
Define the Stage 0 exit conditions.

This should:
<span class="p">-</span> Be written as checkboxes
<span class="p">-</span> Be verifiable
<span class="p">-</span> Map directly to future tests

If these are vague, implementation will drift.
<span class="p">
---
</span>
<span class="gu">## Non-Goals (Out of Scope)</span>

Purpose:
Prevent scope creep.

This is one of the most important sections.

Explicitly state:
<span class="p">-</span> What is NOT included
<span class="p">-</span> What might be future work
<span class="p">-</span> What is intentionally excluded

If it's not here, it’s allowed to creep in later.
<span class="p">
---
</span>
<span class="gu">## Acceptance Scenarios</span>

Purpose:
Translate goals into behavior.

Split into:

<span class="gu">### Happy Paths</span>
<span class="p">-</span> Primary successful user flows

<span class="gu">### Unhappy Paths</span>
<span class="p">-</span> Validation failures
<span class="p">-</span> API failures
<span class="p">-</span> Edge behaviors
<span class="p">-</span> Retry logic

These become:
<span class="p">-</span> Stage 1 guardrails
<span class="p">-</span> Stage 2 tests

If unhappy paths are missing, refactors will break behavior later.
<span class="p">
---
</span>
<span class="gu">## Edge Cases</span>

Purpose:
Surface unusual but realistic conditions.

Examples:
<span class="p">-</span> Timezones
<span class="p">-</span> Leap years
<span class="p">-</span> Null fields
<span class="p">-</span> Legacy users

This section reduces future surprise.
<span class="p">
---
</span>
<span class="gu">## Approach (Short Rationale)</span>

Purpose:
Outline high-level implementation strategy.

This is not code.

It should:
<span class="p">-</span> Describe DB changes
<span class="p">-</span> Describe routing logic
<span class="p">-</span> Describe flow positioning
<span class="p">-</span> Describe UI intent

This prevents architectural drift in Stage 1.
<span class="p">
---
</span>
<span class="gu">## Decisions (Locked)</span>

Purpose:
Freeze important product/architecture decisions.

Examples:
<span class="p">-</span> Required vs optional
<span class="p">-</span> Editable vs immutable
<span class="p">-</span> Field format decisions
<span class="p">-</span> Explicit flags vs derived state

This prevents re-litigating decisions during implementation.

If a decision changes, Stage 0 must be updated.
<span class="p">
---
</span>
<span class="gu">## Stage 0 Exit Gate</span>

Purpose:
Declare readiness.

Stage 0 is complete when:
<span class="p">
-</span> Problem is clear
<span class="p">-</span> Goals are testable
<span class="p">-</span> Non-goals are defined
<span class="p">-</span> Acceptance scenarios include unhappy paths
<span class="p">-</span> Major decisions are locked
<span class="p">-</span> Approach is coherent</code></pre></figure>


</details>

<p>No production code is written before this file exists.</p>
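<p>That rule can be checked mechanically. Here’s a sketch (a hypothetical helper; the section names come from the annotated template above) that flags a brief missing required sections:</p>

```python
import re

# Section headings every Stage 0 brief must contain,
# taken from the annotated template above.
REQUIRED_SECTIONS = [
    "Status",
    "Problem",
    "Goal",
    "Success Criteria",
    "Non-Goals (Out of Scope)",
    "Acceptance Scenarios",
    "Decisions (Locked)",
    "Stage 0 Exit Gate",
]

def missing_sections(brief_markdown: str) -> list[str]:
    """Return the required '## ...' headings absent from the brief."""
    found = set(re.findall(r"^##\s+(.+?)\s*$", brief_markdown, flags=re.M))
    return [s for s in REQUIRED_SECTIONS if s not in found]
```

<p>Run as a pre-commit check on <code class="language-plaintext highlighter-rouge">docs/briefs/*.md</code>, this makes “the brief exists and is complete” a gate rather than a habit.</p>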

<hr />

<h2 id="stage-1--implementation">Stage 1 — Implementation</h2>

<p>Now the AI works strictly within the brief.</p>

<p>It:</p>

<ul>
  <li>Implements only what’s in scope</li>
  <li>Handles happy + unhappy paths</li>
  <li>Avoids refactoring or optimization</li>
</ul>

<p>If I notice unrelated issues (for example, another page bug), they are not addressed unless the brief changes.</p>

<p>The PR is opened as <strong>Draft</strong> during this stage.</p>

<p><strong>Label:</strong> <code class="language-plaintext highlighter-rouge">stage-1-impl</code></p>

<hr />

<h2 id="stage-2--tests">Stage 2 — Tests</h2>

<p>Now we lock behaviour.</p>

<p>Tests are generated directly from:</p>

<ul>
  <li>Happy paths</li>
  <li>Unhappy paths</li>
  <li>Edge cases listed in the brief</li>
</ul>

<p>CI must pass.</p>

<p><strong>Label moves to:</strong> <code class="language-plaintext highlighter-rouge">stage-2-tests</code></p>

<p>This stage protects against “refactor regret.”</p>

<hr />

<h2 id="stage-3--refactor">Stage 3 — Refactor</h2>

<p>Now that behaviour is protected:</p>

<ul>
  <li>Improve structure</li>
  <li>Align naming</li>
  <li>Remove duplication</li>
  <li>Tighten types</li>
</ul>

<p>Tests must remain green.</p>

<p><strong>Label:</strong> <code class="language-plaintext highlighter-rouge">stage-3-refactor</code></p>

<p>No behaviour changes are allowed.</p>

<hr />

<h2 id="stage-4--hardening">Stage 4 — Hardening</h2>

<p>This is a risk sweep.</p>

<p>Checklist:</p>

<ul>
  <li>Security concerns?</li>
  <li>Dependency impact?</li>
  <li>Performance issues?</li>
  <li>Logging sufficient?</li>
</ul>

<p>If anything risky is found, it’s either fixed or explicitly documented.</p>

<p><strong>Label:</strong> <code class="language-plaintext highlighter-rouge">stage-4-hardening</code></p>

<hr />

<h2 id="stage-5--pr-packaging">Stage 5 — PR Packaging</h2>

<p>The final state of the PR includes:</p>

<ul>
  <li>Clear summary</li>
  <li>Link to the brief</li>
  <li>Screenshots (if UI)</li>
  <li>Risk section</li>
  <li>Rollback plan</li>
</ul>

<p>The PR template enforces this structure.</p>

<p><strong>Label:</strong> <code class="language-plaintext highlighter-rouge">ready-for-review</code></p>

<p>Now it’s safe to merge.</p>

<hr />

<h2 id="the-pr-lifecycle">The PR Lifecycle</h2>

<p>Here’s how a typical PR progresses:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Draft
  → stage-1-impl
  → stage-2-tests
  → stage-3-refactor
  → stage-4-hardening
  → ready-for-review
  → merge
</code></pre></div></div>

<p>One PR.<br />
Multiple maturity states.</p>

<p>Not multiple PRs.</p>

<hr />

<h2 id="what-changed-for-me">What Changed for Me</h2>

<p>This workflow:</p>

<ul>
  <li>Prevents premature optimization</li>
  <li>Reduces scope drift</li>
  <li>Synchronizes AI with human evaluation</li>
  <li>Creates clear checkpoints</li>
  <li>Reduces anxiety</li>
  <li>Improves review quality</li>
</ul>

<p>The key insight is simple:</p>

<p>AI generates quickly.<br />
Integration requires cadence.</p>

<p>The stages create that cadence.</p>

<hr />

<h2 id="final-thought">Final Thought</h2>

<p>If you’re experimenting with AI-assisted development, try this:</p>

<ul>
  <li>Don’t optimize your prompts.</li>
  <li>Optimize your structure.</li>
  <li>Create stage boundaries.</li>
  <li>Define exit gates.</li>
  <li>Use markdown briefs as contracts.</li>
  <li>Let the PR reflect maturity progression.</li>
</ul>

<p>You don’t need to control the AI.</p>

<p>You need to design the environment it operates in.</p>]]></content><author><name>Vinicius Delascio</name></author><category term="engineering" /><category term="ai" /><category term="workflow" /><summary type="html"><![CDATA[A concrete walkthrough of my stage-gated AI workflow — including file structure, markdown briefs, PR progression, and final documentation.]]></summary></entry><entry><title type="html">Building My Personal Engineering Operating System (With AI)</title><link href="https://vinidel.github.io/vinidelascio.github.io/personal-engineering-os/" rel="alternate" type="text/html" title="Building My Personal Engineering Operating System (With AI)" /><published>2026-02-11T00:00:00+00:00</published><updated>2026-02-11T00:00:00+00:00</updated><id>https://vinidel.github.io/vinidelascio.github.io/personal-engineering-os</id><content type="html" xml:base="https://vinidel.github.io/vinidelascio.github.io/personal-engineering-os/"><![CDATA[<p>Over the past few weeks, I’ve been experimenting with AI-assisted development on one of my side projects. At first, the experience felt incredibly fast. Then it started to feel chaotic.</p>

<p>The problem wasn’t the AI — it was the absence of structure.</p>

<p>So I introduced a stage-gated workflow — what I now call my Personal Engineering Operating System — and the difference was immediate.</p>

<h2 id="the-problem-with-raw-ai-speed">The Problem With Raw AI Speed</h2>

<p>AI can generate:</p>

<ul>
  <li>Hundreds of lines of code in seconds</li>
  <li>Refactors instantly</li>
  <li>Entire test suites on command</li>
  <li>Architectural suggestions continuously</li>
</ul>

<p>The issue isn’t capability. It’s integration speed. AI can easily outpace your ability to:</p>

<ul>
  <li>Review</li>
  <li>Reflect</li>
  <li>Evaluate trade-offs</li>
  <li>Protect direction</li>
  <li>Maintain architectural coherence</li>
</ul>

<p>When AI outruns you, you stop designing and start reacting. That’s when anxiety creeps in.</p>

<h2 id="the-solution-stage-gated-development">The Solution: Stage-Gated Development</h2>

<p>Instead of letting AI “run,” I introduced stages.</p>

<p>Not Jira stages. Not enterprise bureaucracy. Cognitive stages.</p>

<p>Each stage represents a different thinking mode.</p>

<h2 id="the-six-stages">The Six Stages</h2>

<h3 id="stage-0--frame-the-work">Stage 0 — Frame the Work</h3>

<p>Before writing production code, I define:</p>

<ul>
  <li>The problem</li>
  <li>Success criteria</li>
  <li>Non-goals</li>
  <li>Happy and unhappy paths</li>
  <li>Risks</li>
</ul>

<p><strong>Output:</strong> a simple Feature Brief in <code class="language-plaintext highlighter-rouge">docs/briefs/</code>.</p>

<p>No code is allowed yet. This prevents scope drift and ambiguity.</p>

<h3 id="stage-1--make-it-work">Stage 1 — Make It Work</h3>

<p>Now the goal is simple: make it work.</p>

<p>Focus only on:</p>

<ul>
  <li>The smallest vertical slice</li>
  <li>Happy path first</li>
  <li>Unhappy paths next</li>
  <li>Basic logging</li>
</ul>

<p>No refactoring. No optimization. No architectural polishing. Just working behaviour.</p>

<h3 id="stage-2--lock-behaviour-with-tests">Stage 2 — Lock Behaviour With Tests</h3>

<p>Once it works, we protect it.</p>

<ul>
  <li>Characterisation tests (capture current behaviour)</li>
  <li>Intent tests (assert desired behaviour)</li>
  <li>CI must pass</li>
</ul>

<p>This stage prevents “refactor regret.”</p>
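<p>To make the distinction concrete, here’s a hypothetical example (<code class="language-plaintext highlighter-rouge">format_mood</code> is a made-up function, not from my project): the characterisation test pins what the code does today, while the intent test asserts what the brief requires:</p>

```python
def format_mood(score: int) -> str:
    # Hypothetical production function under test.
    labels = {1: "bad", 2: "ok", 3: "good"}
    return labels.get(score, "unknown")

# Characterisation test: freezes current behaviour, whatever it is,
# so Stage 3 refactors cannot silently change it.
def test_unknown_score_is_characterised():
    assert format_mood(99) == "unknown"

# Intent test: asserts behaviour the brief explicitly requires.
def test_valid_scores_map_to_labels():
    assert format_mood(1) == "bad"
    assert format_mood(3) == "good"
```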

<h3 id="stage-3--refactor-and-align">Stage 3 — Refactor and Align</h3>

<p>Only after behaviour is protected do we improve structure:</p>

<ul>
  <li>Simplify flows</li>
  <li>Align naming and patterns</li>
  <li>Tighten types</li>
  <li>Reduce duplication</li>
</ul>

<p>Behaviour must remain unchanged.</p>

<h3 id="stage-4--harden-and-de-risk">Stage 4 — Harden and De-Risk</h3>

<p>Now we look for surprises:</p>

<ul>
  <li>Security considerations</li>
  <li>Dependency health</li>
  <li>Performance sanity</li>
  <li>Observability</li>
</ul>

<p>Anything risky must be fixed or explicitly documented.</p>

<h3 id="stage-5--pr-and-rollout">Stage 5 — PR and Rollout</h3>

<p>Finally:</p>

<ul>
  <li>Clear PR summary</li>
  <li>Evidence (tests, screenshots)</li>
  <li>Risks and rollback plan</li>
</ul>

<p>The goal is to make it easy to review and safe to ship.</p>

<h2 id="why-this-worked">Why This Worked</h2>

<p>Three things changed immediately.</p>

<h3 id="1-less-cognitive-load">1. Less Cognitive Load</h3>

<p>I stopped thinking about everything at once. Each stage has a single focus. There’s no mixing of modes.</p>

<h3 id="2-less-scope-drift">2. Less Scope Drift</h3>

<p>For example, I noticed a small unrelated issue during implementation. Previously, I would have fixed it immediately.</p>

<p>Now, Stage 0 defines scope. If it’s not in the brief, it waits.</p>

<p>That alone reduced distraction significantly.</p>

<h3 id="3-less-anxiety">3. Less Anxiety</h3>

<p>Anxiety often comes from:</p>

<ul>
  <li>Undefined “done”</li>
  <li>Too many simultaneous concerns</li>
  <li>AI moving faster than you</li>
</ul>

<p>Stages created checkpoints. Exit gates created clarity. I always knew:</p>

<ul>
  <li>Where we were</li>
  <li>What “done” meant</li>
  <li>What the next micro-step was</li>
</ul>

<p>Flow became easier.</p>

<h2 id="the-most-important-shift">The Most Important Shift</h2>

<p>I didn’t slow the AI down — I synchronized with it.</p>

<p>AI generates. I evaluate. We advance a stage. Repeat.</p>

<p>The structure ensures we follow my path, not the AI’s path.</p>

<h2 id="the-core-principle">The Core Principle</h2>

<p>AI accelerates execution.<br />
Structure preserves direction.</p>

<p>Raw AI speed feels productive. Structured AI collaboration feels sustainable.</p>

<p>I now optimize for sustainable throughput, not burst velocity.</p>

<h2 id="the-meta-lesson">The Meta-Lesson</h2>

<p>The real leverage isn’t better prompts. It’s designing environments where AI behaves predictably.</p>

<p>Once I added:</p>

<ul>
  <li>A stage-based workflow</li>
  <li>Clear exit gates</li>
  <li>Structured briefs</li>
  <li>PR enforcement</li>
</ul>

<p>The AI stopped feeling chaotic and started behaving like a disciplined teammate.</p>

<h2 id="final-thought">Final Thought</h2>

<p>If AI-assisted development feels overwhelming, don’t reduce AI usage.</p>

<p>Introduce structure.<br />
Separate thinking modes.<br />
Define exit gates.<br />
Control cadence.</p>

<p>You don’t need to control the AI.</p>

<p>You need to control the environment it operates in.</p>

<p>If you’re experimenting with AI-native engineering workflows, I’d love to compare notes.</p>]]></content><author><name>Vinicius Delascio</name></author><category term="engineering" /><category term="ai" /><summary type="html"><![CDATA[How I introduced stage-gated workflows to tame AI-assisted development — and why structure beats speed.]]></summary></entry></feed>