# Same Prompt, Four AIs: Why the Answers Aren't the Same

*The differences aren't just in the answers; they're in the thinking.*

Generative AI tools are often discussed as if they were interchangeable: different interfaces delivering broadly similar outputs. However, when applied to complex intellectual tasks, meaningful differences begin to emerge. To explore this, I ran the same academically rigorous prompt through four leading systems: Claude, ChatGPT, Google Gemini, and Copilot. The task required a full thematic analysis of a researcher's career using the framework developed by Virginia Braun and Victoria Clarke. What followed was not simply variation in output, but variation in how each system approached the act of analysis itself.

## Same Input, Different Interpretations

At a high level, the experiment is simple:

One prompt → Four models → Four distinct approaches

What changes is not the instruction, but how each system:

- Interprets the task
- Handles uncertainty
- Applies methodology
- Defines ...
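The fan-out described above can be sketched in a few lines. This is a minimal sketch, not the post's actual setup: the `call_model` stub stands in for each provider's real API client, and the prompt wording is an assumption based on the task described.

```python
# One unchanged prompt, sent to each of the four systems in turn.
# call_model is a stub; a real run would dispatch to each provider's API.

PROMPT = (
    "Conduct a full thematic analysis of this researcher's career, "
    "following Braun and Clarke's thematic analysis framework."
)

MODELS = ["Claude", "ChatGPT", "Gemini", "Copilot"]

def call_model(model: str, prompt: str) -> str:
    """Stub for one provider call; returns a placeholder response."""
    return f"[{model}] response to: {prompt[:40]}..."

def run_experiment(prompt: str, models: list[str]) -> dict[str, str]:
    # Identical input for every system; only the system varies.
    return {model: call_model(model, prompt) for model in models}

results = run_experiment(PROMPT, MODELS)
for model, answer in results.items():
    print(model, "->", answer)
```

The point of the structure is that any divergence in the four answers can be attributed to the systems themselves, not to differences in the instruction.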
# Analysing Documents with AI: A Multi-Stage Prompting Approach

*What happens when a data scientist and a statistician are asked to challenge each other's reading of the same paper?*

The coding-focused prompting technique described in a previous post has a natural sibling: the same multi-stage, dual-persona approach works remarkably well for document analysis. Instead of building software through iterative expert review, you are analysing a piece of work (a research paper, a dataset report, a literature review) and subjecting it to exactly the same kind of structured, adversarial scrutiny.

This post walks through how that adapted prompt works, why the underlying techniques make it more than a glorified summarisation tool, and what happened when it was tested on a social network analysis of co-authorship patterns in an academic repository.

## Why Not Just Ask for a Summary?

A single-shot summary prompt is fine if you want a précis. But analysis is different. Analysis requires aski...
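The multi-stage, dual-persona loop can be sketched as follows. This is an illustrative skeleton under stated assumptions, not the prompt itself: the persona briefs, stage names, and the `ask` stub are placeholders, and a real run would route each turn through an actual chat-completion API while carrying the transcript forward as context.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    brief: str  # role instruction prepended to every turn for this persona

def ask(persona: Persona, task: str, context: list[str]) -> str:
    """Stub for one model call: persona brief + shared transcript + task."""
    return f"{persona.name}: response with {len(context)} prior turns in view"

def dual_persona_review(document: str, stages: list[str]) -> list[str]:
    # Two adversarial roles reading the same document.
    analyst = Persona("Data scientist", "Read the work for methods and evidence.")
    reviewer = Persona("Statistician", "Challenge the analyst's reading.")
    transcript: list[str] = []
    for stage in stages:
        # Each stage: the analyst reads, the reviewer pushes back, and both
        # turns join the shared transcript that later stages build on.
        transcript.append(ask(analyst, f"{stage}: {document}", transcript))
        transcript.append(ask(reviewer, f"Challenge the above on: {stage}", transcript))
    return transcript

log = dual_persona_review(
    "co-authorship network analysis",
    ["summarise the claims", "assess the methodology", "probe the limitations"],
)
print(len(log))  # two turns per stage
```

The adversarial structure is the design choice that matters: because the reviewer sees and must challenge the analyst's turn at every stage, weak readings get contested rather than merely restated.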