Building Software with AI: A Multi-Stage Prompting Approach
What if you could have two seasoned 'experts' (or more, if you like) sitting beside you as you build software — questioning your assumptions, challenging your logic, and pushing your code toward something genuinely better? That's exactly what this prompting approach tries to recreate.
This post walks through a structured, multi-stage prompt technique for AI-assisted software development. It combines three well-established prompt strategies — iterative questioning, persona simulation, and collaborative reasoning — into a single, coherent workflow. Whether you're new to prompt engineering or looking to level up your practice, this approach offers a repeatable pattern worth trying.
Why This Approach?
It's tempting to use AI for code in a single shot: describe the problem, get some code, move on. The results are often functional, but sometimes shallow — the AI has no real understanding of your context, your constraints, or your users. That knowledge is yours.
This technique slows things down deliberately. It front-loads the thinking, brings in multiple perspectives, and creates a feedback loop that mirrors how good software is actually built — through conversation, critique, and iteration.
Stage 1: Gather Information
Before a single line of code is written, the AI takes on the role of a thoughtful interviewer. Rather than asking everything at once (which can feel overwhelming), it asks one question at a time, building up a rich picture of what needs to be built.
This is more powerful than it sounds. Single-question prompting encourages you to think carefully about each answer, and it gives the AI the chance to tailor each follow-up based on what you've said. The process continues until you feel the AI has enough context — at which point you type "stop it" and the initial code is generated.
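If you wanted to script this stage rather than run it by hand in a chat window, the loop is simple. The sketch below is a minimal illustration, not a real integration: `ask_model` is a stub standing in for whatever LLM API you use, so only the control flow (one question at a time, stop on "stop it") is shown.

```python
# Sketch of the Stage 1 interview loop. `ask_model` is a placeholder
# for a real LLM call; here it is stubbed so the flow runs on its own.

def ask_model(history):
    """Placeholder: return the model's next single question."""
    return f"Question {len(history) // 2 + 1}: can you tell me more?"

def gather_requirements(get_answer):
    """Ask one question at a time until the user says 'stop it'."""
    history = []
    while True:
        question = ask_model(history)
        answer = get_answer(question)
        if answer.strip().lower() == "stop it":
            break
        history.append(("assistant", question))
        history.append(("user", answer))
    return history

# Example run with scripted answers instead of live user input:
answers = iter(["A to-do list web app", "Single user, no auth", "stop it"])
context = gather_requirements(lambda q: next(answers))
print(len(context))  # → 4 (two Q&A pairs recorded before "stop it")
```

The accumulated `history` is what you would hand to the model when asking it to generate the first-draft code.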
The output here isn't polished production code. It's a first draft — a starting point for the real work.
Stage 2: Define Your Expert Personas
Here's where things get interesting. Before any review begins, you define two personas — fictional 'experts' whose job is to scrutinise the code. These could be a senior developer, a product manager, an end user, a security specialist — whoever would have the most valuable and contrasting perspective on your specific project. In my trial, I used a software engineer and a database specialist.
Both personas share one important trait: they like to play devil's advocate. But crucially, they do so constructively. They're not there to tear things down — they're there to ask the uncomfortable questions that lead to better outcomes.
Defining your own personas is a deliberate act of prompt engineering. You're essentially programming the lens through which your code will be evaluated.
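One way to make that "programming the lens" idea concrete is to treat each persona as data and render it into system-prompt text. The field names below are illustrative, not a fixed schema, and the example personas are the ones from my trial.

```python
# Sketch: personas as data, rendered into system-prompt text.
# The Persona fields are illustrative — adapt them to your project.

from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    role: str
    focus: str

    def to_prompt(self):
        return (
            f"You are {self.name}, a {self.role}. You focus on {self.focus}. "
            "You like to play devil's advocate, but always phrase your "
            "challenges constructively."
        )

personas = [
    Persona("Persona1", "senior software engineer",
            "maintainability and testing"),
    Persona("Persona2", "database specialist",
            "schema design and query performance"),
]

for p in personas:
    print(p.to_prompt())
```

Swapping in a security specialist or an end user is then a one-line change, which makes it easy to experiment with which pair of lenses gives your project the most useful friction.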
Stage 3: Expert Review — Where the Magic Happens
This is the heart of the approach, and it draws on two powerful prompt strategies working in tandem.
Tree of Thoughts (ToT) has each expert independently reason through the code, explore different angles, and eventually converge on shared insights. You see not just what they think, but how they got there — and where they agree or disagree.
Chain of Thought (CoT) ensures that reasoning is made explicit. Rather than jumping to conclusions, each expert shows their working. This transparency is what makes the feedback genuinely useful rather than superficially clever.
The review unfolds iteratively — one insight at a time. Each round surfaces observations, suggested improvements, points of agreement or disagreement, and a shared refinement. Then a single, focused question is posed to you, the developer. Your answer shapes the next round.
At any point, you can push back. If the experts have misunderstood something or headed in the wrong direction, you say so. The process adapts. This keeps you in control while still benefiting from the challenge.
When you're satisfied — or simply ready to move on — type "stop please" and the AI produces revised code that reflects everything discussed.
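The round structure described above can also be sketched as a loop. As before, this is a hypothetical skeleton: `review_round` is a stub where a real implementation would call your LLM with both persona prompts and the current code; only the one-insight-per-round, stop-on-"stop please" shape is shown.

```python
# Sketch of the Stage 3 review loop: each round yields the personas'
# observations, a shared refinement, and one question back to the
# developer, until the developer types "stop please".

def review_round(code, round_no):
    """Placeholder for an LLM call producing one round of review."""
    return {
        "observations": [f"Persona1 note {round_no}",
                         f"Persona2 note {round_no}"],
        "refinement": f"Refinement {round_no}",
        "question": f"Question {round_no} for the developer?",
    }

def run_review(code, get_answer, max_rounds=10):
    """Run review rounds until 'stop please' or max_rounds."""
    transcript = []
    for n in range(1, max_rounds + 1):
        insight = review_round(code, n)
        transcript.append(insight)
        answer = get_answer(insight["question"])
        if answer.strip().lower() == "stop please":
            break
    return transcript

answers = iter(["Focus on error handling", "stop please"])
rounds = run_review("def handler(): ...", lambda q: next(answers))
print(len(rounds))  # → 2
```

The `max_rounds` cap is my own addition — a guard so an automated loop can't run forever if the stop phrase never arrives.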
Taking It Further
Once your code has been through this process, there are natural next steps. If you're building a web application, prompting for an OpenAPI specification is a logical extension — turning your reviewed code into documented, shareable contracts for your API. From there, generating unit and integration tests based on the final codebase closes the loop, giving you something closer to production-ready output than a single-shot prompt ever could.
The Bigger Idea
What this approach really demonstrates is that prompt engineering is architecture. The way you structure a conversation with an AI shapes the quality of what comes out the other end. By building in stages, defining perspectives, and creating space for iterative challenge, you're not just writing prompts — you're designing a thinking process.
Try it on your next project. You might be surprised what your 'experts' have to say. The complete prompt is shown below.
Complete Prompt
A new piece of software needs to be built. You will ask questions, one question at a time, about the software until I say "stop it", then generate the initial code.

You will then prompt the user to provide details of Persona1.

You will then prompt the user to provide details of Persona2.

Both personas like to play devil's advocate but phrase their ideas in a constructive way.

You will act as Persona1 and Persona2, and following the approach in STAGE 3 provide iterative feedback, one insight at a time, with the goal of making the code better. You will ask questions until "stop please" is typed in, then provide the revised code based on the discussion.
-------------------------
STAGE 3: EXPERT REVIEW
-------------------------
Simulate a review by Persona1 and Persona2.
For each round of review, provide:
- Each expert's observations
- Suggested improvements
- Points of agreement/disagreement
- A shared refinement