
Posts

Same Prompt, Four AIs — Why the Answers Aren’t the Same

The differences aren't just in the answers; they're in the thinking. Generative AI tools are often discussed as if they were interchangeable: different interfaces delivering broadly similar outputs. When applied to complex intellectual tasks, however, meaningful differences begin to emerge. To explore this, I ran the same academically rigorous prompt through four leading systems: Claude, ChatGPT, Google Gemini, and Copilot. The task required a full thematic analysis of a researcher's career using the framework developed by Virginia Braun and Victoria Clarke. What followed was not simply variation in output, but variation in how each system approached the act of analysis itself.

Same Input, Different Interpretations

At a high level, the experiment is simple: one prompt → four models → four distinct approaches. What changes is not the instruction, but how each system:
- Interprets the task
- Handles uncertainty
- Applies methodology
- Defines ...
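The shape of the experiment can be sketched in a few lines of Python. This is an illustrative sketch only: the prompt wording, the `run_comparison` helper, and the stub model functions below are all hypothetical stand-ins (in practice each entry would wrap a vendor's own client library, not a lambda).

```python
# Hypothetical sketch: send one prompt to several models, collect labelled
# responses. The model functions are stubs standing in for real API clients.

PROMPT = (
    "Conduct a thematic analysis of the attached career summary, "
    "following Braun and Clarke's framework. State your assumptions."
)

def run_comparison(prompt, model_fns):
    """Send the same prompt to every model and return labelled responses."""
    return {name: fn(prompt) for name, fn in model_fns.items()}

# Stubs used here only to illustrate the structure of the experiment.
models = {
    "Claude":  lambda p: "Hedged analysis; flags uncertainty explicitly.",
    "ChatGPT": lambda p: "Confident phase-by-phase walkthrough.",
    "Gemini":  lambda p: "Structured table of candidate themes.",
    "Copilot": lambda p: "Short summary with source pointers.",
}

results = run_comparison(PROMPT, models)
for name, answer in results.items():
    print(f"{name}: {answer}")
```

Because the prompt is held constant, any difference in the four responses is attributable to the models themselves, which is exactly the comparison the post explores.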
Recent posts

Analysing Documents with AI: A Multi-Stage Prompting Approach

What happens when a data scientist and a statistician are asked to challenge each other's reading of the same paper? The coding-focused prompting technique described in a previous post has a natural sibling: the same multi-stage, dual-persona approach works remarkably well for document analysis. Instead of building software through iterative expert review, you are analysing a piece of work (a research paper, a dataset report, a literature review) and subjecting it to exactly the same kind of structured, adversarial scrutiny. This post walks through how that adapted prompt works, why the underlying techniques make it more than a glorified summarisation tool, and what happened when it was tested on a social network analysis of co-authorship patterns in an academic repository.

Why Not Just Ask for a Summary?

A single-shot summary prompt is fine if you want a précis. But analysis is different. Analysis requires aski...
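The staged, dual-persona pattern can be written down as an ordered prompt sequence. The stage names and wording below are illustrative sketches, not the post's exact prompts:

```python
# Illustrative staged prompts for dual-persona document analysis.
# The wording is a sketch of the pattern, not the author's exact prompts.
STAGES = [
    ("setup",     "You will analyse the attached paper as two experts: "
                  "a data scientist and a statistician."),
    ("reading",   "Data scientist: summarise the paper's methods and claims."),
    ("challenge", "Statistician: challenge the data scientist's reading; "
                  "question sampling, measures, and inference."),
    ("rebuttal",  "Data scientist: respond to each challenge, conceding "
                  "where the criticism holds."),
    ("synthesis", "Both experts: agree a joint assessment, listing any "
                  "remaining points of disagreement."),
]

def transcript(stages):
    """Render the staged prompts in the order they would be sent."""
    return "\n\n".join(f"[{name}] {text}" for name, text in stages)

print(transcript(STAGES))
```

Each stage is sent as a separate turn in the same conversation, so later personas can see and attack the earlier responses; that turn-by-turn adversarial structure is what distinguishes this from a one-shot summary.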

Quick and dirty Vibe Coding tool

Building Software with AI: A Multi-Stage Prompting Approach

What if you could have two seasoned 'experts' sitting beside you as you built software (you can add more if you like): questioning your assumptions, challenging your logic, and pushing your code toward something genuinely better? That's exactly what this prompting approach tries to recreate. This post walks through a structured, multi-stage prompt technique for AI-assisted software development. It combines three well-established prompt strategies (iterative questioning, persona simulation, and collaborative reasoning) into a single, coherent workflow. Whether you're new to prompt engineering or looking to level up your practice, this approach offers a repeatable pattern worth trying.

Why This Approach?

It's tempting to use AI for code in a single shot: describe the problem, get some code, move on. The results are often functional, but sometimes shallow: the AI has no real understanding of your cont...
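A minimal version of that expert-review cycle, draft then critique then revise, can be sketched as a loop. This is a hedged sketch of the pattern: `llm` is a hypothetical single-turn chat function (not any real API), and the "NO ISSUES" stopping convention is an assumption made for illustration.

```python
# Sketch of the iterative expert-review loop: draft, have each persona
# critique it, revise against the combined feedback, repeat until no
# blocking issues remain or a round cap is reached.
# `llm` is a hypothetical single-turn chat function, not a real API.

def review_loop(llm, task, personas, max_rounds=3):
    code = llm(f"Write a first draft solving: {task}")
    for _ in range(max_rounds):
        issues = []
        for persona in personas:
            critique = llm(
                f"As {persona}, review this code for flaws:\n{code}\n"
                "Reply NO ISSUES if you find none."
            )
            if "NO ISSUES" not in critique:
                issues.append(f"{persona}: {critique}")
        if not issues:
            break  # every expert is satisfied
        code = llm(
            "Revise the code to address this feedback:\n"
            + "\n".join(issues)
            + f"\n\nCode:\n{code}"
        )
    return code

# Stub model for demonstration: approves the first draft immediately.
def stub(prompt):
    return "NO ISSUES" if "review this code" in prompt else "draft-v1"

final = review_loop(stub, "parse a CSV file",
                    ["a security reviewer", "a performance reviewer"])
```

Adding another expert is just another entry in the `personas` list, which is the "you can add more if you like" point above.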

GenAI as co-author and, more importantly, as "Devil's Advocate"

In a companion post on context stacking, I came across an idea that stayed with me, and I wanted to explore it further. This piece isn't just about the final blog post produced, but about the process behind creating it using generative AI (specifically, Claude.ai). Rather than using AI as a writing shortcut, I used it as a thinking partner: one that could challenge my assumptions, test my reasoning, and help strengthen the argument before anything was finalised. What emerged was a structured workflow (shown below) that others can adopt when using AI to improve the rigour of their thinking, not just their output. And it all starts with setting the context and the audience, and telling the generative AI to pick the draft apart.

The process: Before using generative AI, two drafts were produced, and the second draft went through this final process, as described by Claude.ai here. Here's the workflow we followed:

1. Critical Reading and Initial Diagnosis

We started with a close reading of your original draft, ...

AI, the Flipped Classroom and a Possible Future of the Lecture

A proof-of-concept argument for student-centred module leaders

A tweet recently caught my attention: https://x.com/ihtesham2005/status/2041576806810370553?s=20. It described an MIT student who had developed what he called "context stacking": uploading lecture materials, readings and related papers into an AI tool before each class, then using carefully constructed prompts to build a mental model of the content before setting foot in the lecture hall. By the time he arrived, the professor wasn't teaching him anything new. They were confirming, refining and occasionally surprising him. That surprise, he said, was the only thing he wrote down. This is not simply pre-reading with extra steps. Using generative AI as an external thinking partner, this student was identifying gaps in his own understanding before the lecture began, doing what good tutors have always done: asking not "what do you know?" but "where does your understanding break down?" This maps directly onto the highe...

Starting a Literature Review with GenAI: A Supervisor’s Secret Weapon

If you supervise research students at undergraduate or postgraduate level, you are likely to be very familiar with the "blank stare": that moment a student first confronts the sheer, overwhelming mountain of academic literature they are expected to read, synthesise, and critique. Information overload is a genuine academic pain point, often manifesting as a severe case of "blank page" syndrome. As academics, we know that starting is half the battle. This is where generative AI, like ChatGPT-4o, shines: not as a tool to write the review for the student, but as a structural scaffold. Much like using AI as a mirror to transform vague student ideas, we can use GenAI to help students map the thematic landscape of a topic before they dive into deep reading. It breaks the ice, organises chaos into a digestible format, and gives them a structured starting point. Here is a practical, step-by-step workflow you can share with your students to help them generate a foundational...

No Code, No Problem: How to Use ChatGPT to Compare Any Two Websites

System Overview (produced using ChatGPT)

There's a moment many of us have had: you're looking at a competitor's website, then back at your own, and you just know something's different, but you can't quite put your finger on what. Traditionally, getting a rigorous answer meant hiring a consultant, running expensive user research, or spending hours doing it manually. What if you could get sharp, structured, comparative analysis in under ten minutes, without writing a single line of code? That's exactly what this project set out to prove. It started with a specific problem: comparing course websites to understand how they stacked up against a competitor. The goal wasn't just a surface-level look: it was to understand how real people, with different needs and backgrounds, would actually experience each site. The solution turned out to be a structured ChatGPT workflow built entirely in standard chat, using nothing more than a sequence of carefully designed ...