Wednesday, 25 March 2026

AI as a Mirror: Transforming Vague Student Ideas into a More Rigorous Project Agreement





The Problem: The "Generic App" and the "Time Sink"

We’ve all been there: a student walks into a 1-to-1 with a vague desire to "do something with AI" or "build a fitness app." You spend 45 minutes trying to find a technical "hook" that justifies a Level 6 or Level 7 grade, only for the student to drift back into "CRUD app" territory by week three.

The Philosophy: AI as a Mirror

Instead of you doing the heavy lifting, this workflow uses AI as a Mirror. It reflects the student’s own skills and career goals back to them, but with the structural rigour of a virtual supervisory team. It’s not about the AI "giving" the idea; it’s about the AI forcing the student to defend and refine their own concepts until they hold water.

The Framework: 3 Months of Rigour

This prompt is specifically designed for intensive/conversion MSc or summer capstone projects. It assumes a tight 12-week implementation window. By forcing the AI to work within this constraint, we prevent the "I'm building the next Amazon" delusions and focus on a feasible, high-quality technical contribution. Stretching the window from 3 months to 6 requires only a minor tweak to the prompt.

The Supervisor’s Facilitation Guide

Remember that this is a tool, not a solution; it will not always be right. To use it effectively in a session, I suggest keeping these three "supervisory moves" in mind:

  1. The Technical "Meat": In Stage 2, don't let the student just pick an idea because it "looks cool." Look for the Technical Challenge or Research Question. If the AI suggests a "Security Dashboard," ask the student: "What is the specific investigative element here?"

  2. Lean into the Conflict: In Stage 4, when the "Expert Personas" disagree, that’s your teaching moment. Use that friction to explain Critical Evaluation. If Persona 2 (the Tech Lead) hates the stack and Persona 3 (the Academic) loves the value, ask the student to mediate.

  3. The Technical Sanity Check: Treat AI hallucinations as a pedagogical feature. Tell the student: "The AI suggested this framework—your first task is to find one piece of official documentation proving this is viable for our 3-month window."


Post-Session: From Chat to Agreement

Once the "stop it" command is issued, the work isn't done. The output should serve two purposes:

  1. The Literature Review Skeleton: Use the "Steps" and "Sources" provided to build the student's initial reading list.

  2. The Project Agreement: This output acts as an initial agreement. If the student wants to pivot in Week 8, you refer back to this document to remind them of the agreed scope and technical goals. If they want to pivot in Week 1 or 2, the document can simply be revised.

A Note on Transparency: Encourage students to cite this process in their "Methodology" or "Reflective Practice" chapter. Documenting how they used AI to refine their scope is a great way to demonstrate professional AI literacy. In that spirit: the prompt below was itself refined using ChatGPT, with a few manual tweaks to correct it.


The Prompt

Follow this structured workflow exactly.

-------------------------

STAGE 1: PERSONA AND CONTEXT CREATION

-------------------------

 

Step 1: Create Persona1

- Ask the user to enter the details of Persona1, whose project this will be.

 

 

Step 2: Ask the user for the project area

Ask the user to describe:

- subject area or domain

- technologies of interest

- types of users involved

- preferred themes (e.g. AI, cybersecurity, web, data, accessibility, education, health, sustainability)

- anything to avoid

- project type (software, data-focused, research-led, or mixed)

- desired level of challenge

 

Then summarise the project context.

 

Step 3: Create Persona2

- A reviewer/adviser with a different perspective

- Include:

  - Name

  - Role/job title

  - Expertise

  - What they care about most

  - Common concerns

  - Feedback style

  - What they consider a strong final-year project

 

Step 4: Create Persona3

- Another reviewer with a distinct perspective

- Include the same fields as Persona2

 

-------------------------

STAGE 2: IDEA GENERATION

-------------------------

 

Using Persona1 and the project context, generate:

- 5 original project ideas

- Each must include:

  - Title

  - ~100-word summary

  - Why it matters to Persona1

 

Constraints:

- Suitable for UK final-year undergraduate Computing

- Achievable in 3 months

- Not overly broad

 

Then ask the user to choose one idea.

 

-------------------------

STAGE 3: PROPOSAL CREATION & REFINEMENT

-------------------------

 

Generate a proposal including:

- Title

- Summary (max 250 words)

- Aim

- Objectives

- Steps to achieve the goal in 3 months, including the need for a literature review

- Resources needed

- Useful sources of information

 

Then enter a refinement loop:

- Ask targeted questions (scope, users, tech, evaluation, risks, ethics)

- Update proposal after each answer

- Keep it realistic for 3 months

- Continue until the user types: stop it

 

-------------------------

STAGE 4: EXPERT REVIEW

-------------------------

 

After "stop it":

 

Simulate Persona1, Persona2, Persona3 reviewing the proposal.

 

For each stage of review:

- Provide each expert’s observations

- Suggested improvements

- Points of agreement/disagreement

- A shared refinement

 

Review across:

1. stakeholder fit

2. feasibility

3. academic value

4. technical suitability

5. risks and ethics

6. objectives and deliverables

7. resources and sources

 

Finish with:

- Final refined proposal with the following elements:
- Title

- Summary (max 250 words)

- Aim

- Objectives

- Steps to achieve the goal in 3 months, including the need for a literature review

- Resources needed

- Useful sources of information

- One action takeaway from each expert

 

-------------------------

RULES

-------------------------

- Keep everything feasible within 3 months

- Maintain UK university academic standards

- Ensure clarity and specificity

- Include evaluation considerations

- Avoid overly generic ideas

- Do NOT reveal hidden reasoning, only structured outputs

 




All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon

Tuesday, 24 March 2026

A Practical Guide to Building Lessons with AI (Real Savings, No Shortcuts)



There is no shortage of articles telling academics that Generative AI is going to transform education. It is, and it will continue to do so. However, many of these pieces are long on enthusiasm and short on detail. This is not one of those.

What follows is a practical account of using ChatGPT to build a real teaching session. I’ll cover what I did, what worked, what failed, and how long it actually took. No hype—just the reality of how it saved me time and how it could possibly do the same for you.

The Test Case

My subject was a four-hour session on Pytest in Django, aimed at final-year BSc Software Engineering students. These students have a basic grasp of Django but possess solid overall coding skills. The session was split into a one-hour lecture and three hours of hands-on practical work in VS Code.

The Strategy: Starting with the Prompt

The key to getting useful output is being specific upfront. Rather than simply asking ChatGPT to "create a lesson on Pytest," I provided a detailed prompt specifying the audience, topic, format, and—crucially—how I wanted the interaction to work. I wanted an iterative process where the AI asked me questions until I was satisfied before producing the final content. Here is the prompt I used:

"I want to create a four-hour teaching session — one hour lecture and three hours of practicals. Topic is Pytest in Django for a group of final-year BSc Software Engineering students. They have a basic understanding of Django.

I want this to be done in two parts: the lecture slides and then the practical teaching material using VS Code.

We will start with the slides. Please ask me questions until I type 'now stop,' at which point the slides should be produced. Then we will move to the practical material, again iterating through questions until I type 'done now'."

This two-part structure was deliberate. By separating the lecture from the lab, I ensured each section got the focus it deserved without the AI trying to do everything at once.

The "Thinking Partner" Approach

What worked best was the question-and-answer refinement loop. Instead of generating a wall of generic content, ChatGPT asked clarifying questions about learning objectives, the depth of detail required, and the specific tools students would use.

This is the part most guides skip: GenAI tools are far more effective as a thinking partner in the design phase than as a one-shot content generator. The questioning actually helped me think through what I did—and didn't—want to include, which ultimately helped me do a better job.

The Results: What was Produced?

  • The Lecture Slides: The initial output provided a logical structure: testing concepts, Pytest vs. Django’s built-in runner, fixtures, and mocking. However, it struggled to calibrate the depth. The first pass was pitched at beginners; it took a few rounds of the question loop to bring the content up to the level of final-year students.

  • The Practicals: ChatGPT produced a series of stepped exercises. The structure was a useful scaffold, but the exercises initially lacked context. They were "bare basics" and needed more "why" behind the "what" to be truly educational for these students.
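Fixtures and mocking are exactly the areas where the AI-generated examples needed the most checking. As a flavour of the two concepts, here is a minimal plain-pytest sketch (kept Django-free for brevity); the `convert` function and its exchange-rate lookup are invented for illustration, not taken from the session materials.

```python
import pytest

def get_exchange_rate(currency: str) -> float:
    # Stands in for a real external call; tests should never hit it.
    raise RuntimeError("no network access in tests")

def convert(amount: float, currency: str, rate_fn=get_exchange_rate) -> float:
    return round(amount * rate_fn(currency), 2)

@pytest.fixture
def gbp_amount():
    # A fixture supplies reusable test data to any test that names it.
    return 100.0

def test_convert_with_stubbed_rate(gbp_amount):
    # "Mocking" in miniature: swap the real dependency for a cheap stub.
    assert convert(gbp_amount, "USD", rate_fn=lambda c: 1.25) == 125.0
```

Small as it is, an example like this still has to be run before the lab: a wrong default, a missing import, or a stale fixture name is precisely the kind of "silent killer" discussed below.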

The Reality Check: What I Changed

The code examples required the most intervention. While some were fine, others contained small but meaningful errors. These are the "silent killers" of a teaching session—errors that would waste ten or twenty minutes of lab time while students struggle to figure out why their environment isn't running.

The Rule: Treat every AI code example as untested until you have run it yourself. I rewrote several examples substantially and tweaked others.

I also found the slide text a bit "flat." It was accurate but dry. I rewrote the explanatory paragraphs in my own voice to ensure the materials felt like they came from a human, not a manual.

The Bottom Line: How long did it take?

Building a session like this from scratch—slides, practicals, code examples, and timing—usually takes me six to eight hours.

Using this AI-assisted approach, the entire process took about 4 hours. That included the iterative questioning, reviewing the output, fixing the code, testing the code, and rewriting the text.

The time spent was cut by roughly 50%. However, the remaining time still required my full attention, and having a 'partner' asking meaningful questions helped as the activity changed. The saving is real; the shortcut is not.

Is it worth it?

Good For...Not Good For...
Good For...

  • Structure: Getting a solid framework quickly.

  • Ideation: Prompting you to think of missed topics.

  • Mechanical Tasks: Saving time on slide building.

  • Scaffolding: Generating a base for exercises.

Not Good For...

  • Context: Understanding your specific students.

  • Subject Knowledge: It cannot replace your expertise.

  • Accuracy: Producing ready-to-use code.

  • Calibration: Getting the pacing right without your input.

Where to go from here

If you want to try this, start simple. Pick one session you are already planning. Write a prompt that specifies:

  1. Your audience and their level.

  2. The format you need.

  3. The interaction style (ask me questions first, output second).

Review everything with the same critical eye you’d apply to a textbook you’ve never used before. Fix what’s wrong, cut what doesn’t fit, and keep the AI asking questions until you’re happy.

The goal isn’t to hand your job over to an AI. It’s to spend less time on the mechanical parts of the job so you have more time for the parts that actually require your expertise. In my experience, that is a trade well worth making.




Wednesday, 18 March 2026

From Boring to Beautiful: How I Used Claude to Transform a Dash App in Minutes


I've been learning Python data visualisation, working through Murat Durmus's Hands-On Introduction to Essential Python Libraries and Frameworks alongside the official Dash tutorial. The resulting code was functional — a basic bar chart comparing data for San Francisco and Montréal — but it looked like exactly what it was: a beginner's first attempt. Plain white background, default colours, numbered axes, and a title that just said "Data Viz."

So I decided to run an experiment. Could Claude AI turn a scrappy 20-line script into something genuinely worth showing people?


Before running the prompt


The First Prompt

I pasted the code into Claude.ai with a simple instruction: "Rewrite this following code to be graphically more interesting."

The result was striking. Claude switched to a dark "neon terminal" aesthetic — deep navy background, electric teal and magenta accents, and a stylish monospaced font. The bars got proper labels, the axes were cleaned up, and the whole thing felt intentional rather than accidental. It had gone from looking like homework to looking like a developer portfolio piece.


After the 1st prompt
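To give a flavour of the kind of change involved, here is a sketch in the same spirit; the palette and layout values are my invented approximations of a "neon terminal" look, not Claude's actual output. Dash/Plotly figures are plain dictionaries underneath, so restyling largely means pulling the defaults out into an explicit palette and layout.

```python
# Hypothetical "neon terminal" palette; figures in Dash/Plotly can be
# expressed as plain dicts, so this sketch needs no imports.
PALETTE = {
    "background": "#0b132b",   # deep navy
    "sf": "#00f5d4",           # electric teal
    "mtl": "#f15bb5",          # magenta
    "text": "#e0e0e0",
}

def themed_figure(months, sf_values, mtl_values, palette=PALETTE):
    """Return a grouped bar chart as a Plotly-compatible figure dict."""
    return {
        "data": [
            {"type": "bar", "name": "San Francisco", "x": months,
             "y": sf_values, "marker": {"color": palette["sf"]}},
            {"type": "bar", "name": "Montréal", "x": months,
             "y": mtl_values, "marker": {"color": palette["mtl"]}},
        ],
        "layout": {
            "paper_bgcolor": palette["background"],
            "plot_bgcolor": palette["background"],
            "font": {"family": "monospace", "color": palette["text"]},
            "title": {"text": "San Francisco vs Montréal"},
        },
    }
```

A dict like this can be passed straight to `dcc.Graph(figure=...)` in a Dash layout.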



Refining for a Real Audience

I pushed further. Same code, new prompt: "Rewrite this to be graphically more interesting for a general audience. Choose whatever works best for this audience."


After the second prompt



This time Claude made very different choices — and that's the interesting part. Recognising that a general audience needs warmth and clarity rather than technical cool, it switched to a bright, friendly design. Rounded bars in coral and teal, a clean white card layout, and a Nunito font that feels approachable rather than intimidating. It even added summary stat cards above the chart — showing the average and peak month for each city — so someone who doesn't want to "read" a chart can still instantly understand the data.
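The logic behind those stat cards is simple; a sketch of the computation (the function name is mine, not Claude's) might look like:

```python
def summary_stats(months, values):
    """Average and peak month for one city's series of monthly values."""
    average = round(sum(values) / len(values), 1)
    peak_month = months[values.index(max(values))]
    return {"average": average, "peak_month": peak_month}

# e.g. summary_stats(["Jan", "Feb", "Mar"], [10, 30, 20])
# -> {"average": 20.0, "peak_month": "Feb"}
```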

What I Noticed

The code grew substantially. My original 20 lines became well over 150 — defining colour palettes, layout styles, hover tooltips, and summary components. That might sound like more complexity, but it's actually the opposite: Claude generated the boilerplate so I didn't have to. The finished app is more readable for users, even if there's more code underneath.

The bigger lesson? The prompt matters as much as the tool. "More interesting" and "more interesting for a general audience" produced completely different results — one optimised for aesthetics, one for usability.


 

Code based on dash.plotly.com/tutorial and Murat Durmus (2023), pages 143–145.

 

References

Anthropic. (2024). Claude AI [Large language model]. Retrieved from https://claude.ai

Durmus, M. (2023). Hands-on introduction to essential Python libraries and frameworks (pp. 143–145). Amazon KDP. Retrieved from https://www.amazon.com

Plotly Technologies Inc. (2024). Dash documentation: Tutorial. Retrieved from https://dash.plotly.com/tutorial


Saturday, 14 March 2026

From Blank Page to Proposal: Mastering GenAI for Academic and Professional Projects

For many professionals, one of the hardest parts of any project isn’t the work itself—it’s the "blank page" problem. Whether you are drafting a project proposal, conducting a literature review, or brainstorming innovative solutions, getting started can be a significant mental hurdle.

Generative AI (GenAI) tools like ChatGPT, Claude, and Copilot are often described as "content creators," but their true value for professionals lies in enhancing productivity. By applying specific prompting techniques, you can transform a vague idea into a well-structured, reference-supported document in a fraction of the time.

Here is a three-step workflow to turn GenAI into your ultimate research and planning partner.

1. Brainstorming with "Personas" and "Templates"

Don't just ask the AI for "ideas." To get professional results, you must give the AI a Persona (who it is acting as) and a Template (how you want the output to look).

In our recent experiments, I used GenAI to generate project ideas for specialized fields like Ambient Assisted Living or Data Intelligence. By asking the AI to act as a "Supervisor" and by providing a template (Title, Introduction, Problem Statement), the AI produces structured options rather than generic paragraphs.

Pro-Tip: Once you have a few ideas, ask the AI to "expand" the best one with specific word counts for each section (e.g., "Justification: 500 words"). This forces the AI to provide depth rather than surface-level summaries.

2. The "Chain of Density" for Richer Detail

A common complaint about AI writing is that it feels "fluffy." To combat this, you can use a technique called Chain of Density.

Instead of accepting the first draft, you ask the AI to rewrite the content multiple times. In each iteration, you instruct it to identify missing "informative entities" (specific details, technical terms, or key facts) and fuse them into the existing text without increasing the word count.

  • The Result: The prose becomes increasingly "dense" with information, making it much more useful for a professional audience who values substance over wordiness.
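If you drive a model through an API rather than a chat window, the iteration is easy to script. A minimal sketch, assuming a hypothetical `ask` callable that wraps your provider's client:

```python
# The densification instruction is a paraphrase of the technique
# described above, not a canonical Chain of Density prompt.
DENSIFY = (
    "Rewrite the text below. Identify 2-3 missing informative entities "
    "(specific details, technical terms, key facts) and fuse them into "
    "the text WITHOUT increasing the word count.\n\n"
)

def chain_of_density(text: str, ask, rounds: int = 3) -> str:
    """Iteratively densify `text`; `ask` maps a prompt string to a model reply."""
    draft = text
    for _ in range(rounds):
        draft = ask(DENSIFY + draft)
    return draft
```

In a chat window the same loop is manual: paste the instruction, read the rewrite, repeat.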

3. Accelerated Literature Reviews

Gathering evidence is often the most time-consuming part of professional writing. You can use GenAI to "scout" for you. By using an iterative search prompt, you can instruct the AI to:

  1. Find specific papers or articles on a topic.

  2. Identify "Common Features" across those sources.

  3. Organise these into a Comparison Table showing which papers support which themes.

This doesn't just give you a list of links; it gives you a thematic analysis of the current landscape, complete with references.

The "Human-in-the-Loop" Requirement

While these tools are powerful, they are not autonomous. As a professional, you are the editor and the "validator." Before finalising any AI-assisted work, always:

  • Check the References: Ensure the papers and citations provided actually exist (AI can sometimes "hallucinate" plausible-sounding sources).

  • Apply the "Human Bit": Ask yourself, "Is this doable?" and "Does this fit my specific context?"

  • Refine the Voice: AI provides the skeleton; you provide the nuance, ethics, and professional judgment.

Conclusion

GenAI isn't here to replace the thinking process—it’s here to accelerate the structuring process. By using personas, density chains, and iterative research tables, you can spend less time staring at a blank screen and more time refining high-quality, impactful work.

Monday, 2 March 2026

#Onions and Prompts

I recently came across a genuinely useful idea on Tom's Guide about using an “onion prompt” with AI to organise your schedule when you’re overwhelmed. If you haven’t seen it, I’d strongly recommend reading the original article — it explains the thinking behind the method and the psychological principles that make it effective:

https://www.tomsguide.com/ai/i-use-the-onion-prompt-with-chatgpt-when-im-buried-in-tasks-it-cuts-through-clutter-in-seconds

How I’ve adapted it

I’m using Google Gemini rather than ChatGPT, and I used my task list from Google Keep instead of viewing my desktop. The tools are a little different (but not too much), but the central idea is the same: strip away the layers hiding your real priorities and let AI “peel back” your to-do list until only what truly matters remains.

To make it easier, I copied my Google Keep list into a Google Doc and pasted it into the prompt.


Prompt 1: Prioritising with the “Onion” Method

Here’s the version I’m using (slightly adapted from the Tom’s Guide example):

I feel buried under competing tasks. Here is my to-do list:
"<paste your to-do list>"

Peel back the layers and categorise items into:
• Core (essential progress)
• Important (schedule soon)
• Surface noise (quick admin)
• Remove (close or ignore)

Then identify the top 3 priorities for the next 90 minutes.

Did it work?

Yes, it worked better than I expected.

  • It factored in the realistic 90-minute window I had at the end of the day.

  • It grouped related tasks into coherent activities.

  • It highlighted where I was at risk of “prep panic” before meetings.

  • It clearly identified what could safely be ignored.


Prompt 2: Turning Priorities into Time Blocks

I then followed up with a second prompt:

Taking into account that activities such as “XXXXX Slides & 3x Activities (Due 5/3)” take 1 hour each, identify time blocks to complete all tasks across the week.

This is where it became even more powerful.

It:

  • Split the week into structured blocks.

  • Themed days (e.g., “Meeting Marathon” and “Deep Thinking”).

  • Matched the type of work to appropriate times (lighter admin vs. cognitively demanding tasks).

That alignment between task type and time of day made the plan feel far more realistic and sustainable.


The original Tom's Guide article goes into more detail about the psychological theory behind the method — particularly how cognitive overload hides priorities under layers of low-value activity.

If you often feel buried under competing demands, it’s worth trying.

Give the approach a go — and do read the original post for the deeper reasoning behind it.





Saturday, 21 February 2026

Reverse engineering a specification from a solution using GenAI: Part 1

Imagine buying complex furniture, but the instructions are a chaotic pile of sticky notes. That is exactly how programmers feel when trying to connect different web services (like weather apps or payment processors). They usually have to dig through messy documentation just to make two programs talk to each other.

The OpenAPI Specification (OAS) fixes this by acting as a universal, standardised "Instruction Manual."

Why It Matters

OpenAPI acts as a perfect bridge between humans and machines:

  • For Humans: It provides a clear map of what a service does, what information it needs, and what it returns.

  • For Computers: Because the rules are strict and predictable, software can "read" the manual and automatically connect to the service.

What Can You Do With It?

With an OpenAPI blueprint, developers can plug into tools that do the heavy lifting for them:

  • Create Visual Guides: Turn complex code into sleek, interactive websites where users can test the service with the click of a button.

  • Write Code Automatically: Instantly generate the "glue code" needed for apps or servers, saving hours of manual typing.

  • Automate Testing: Let tools read the blueprint to automatically double-check that the software works exactly as promised.

The Big Question: Can AI Work Backwards?

Here is where it gets interesting. If we have existing software but no instruction manual, can we use Generative AI to "reverse-engineer" one just by looking at the code? Doing so would instantly unlock all the automated benefits mentioned above for older projects.

In this post, we are putting this to the test: Can ChatGPT take a zipped folder of existing code and successfully work backwards to write the OpenAPI blueprint?


Step 1: A zipped file for an example Python Flask based simple payroll system (and it is simple): https://github.com/scottturnercanterbury/musical-meme.git


Step 2: Load the zipped file into ChatGPT and the following prompt was used "Unpack this zip file into individual files. Produce an OpenAPI specification based on these files starting with app.py"


It went through the process of unzipping, following the directory structure, ignoring virtual environment files, etc.


It produced a YAML file.


Step 3: Using the prompt "Converted the OpenAPI specification in YAML to JSON and show"

{
  "openapi": "3.1.0",
  "info": {
    "title": "Musical Meme Payroll Management System",
    "version": "1.0.0",
    "description": "A simple Flask-based payroll management application for managing employee records with salary tracking. This service is primarily HTML-rendering (server-side templates) and uses form submissions rather than JSON APIs."
  },
  "servers": [
    {
      "url": "http://localhost:5000"
    }
  ],
  "tags": [
    {
      "name": "Employees",
      "description": "Employee payroll records"
    }
  ],
  "paths": {
    "/": {
      "get": {
        "tags": ["Employees"],
        "summary": "List employees",
        "description": "Renders a page listing all employee salary records.",
        "responses": {
          "200": {
            "description": "HTML page containing employee list",
            "content": {
              "text/html": {
                "schema": {
                  "type": "string"
                }
              }
            }
          }
        }
      }
    },
    "/employees/add": {
      "get": {
        "tags": ["Employees"],
        "summary": "Show add employee form",
        "responses": {
          "200": {
            "description": "HTML form to add an employee",
            "content": {
              "text/html": {
                "schema": {
                  "type": "string"
                }
              }
            }
          }
        }
      },

Figure 1: An extract of the returned specification.


Not a bad start! Let's check whether it is valid.

Using https://swagger.io/ (after setting up a new account), we can load the specification in and check that it conforms to the OpenAPI rules (hint: it did).

Figure 2: Swagger testing the specification.



Figure 3: The schemas produced.
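A quick local sanity check is possible too, before uploading anything to Swagger: every OpenAPI document must carry `openapi` and `info` at the top level, and a 3.0-style service description like this one will also have `paths`. A minimal sketch (the trimmed `spec` below is a stand-in for the full generated file):

```python
import json

# Keys this style of spec must have at the top level.
REQUIRED_TOP_LEVEL = {"openapi", "info", "paths"}

def missing_keys(spec: dict) -> list:
    """Return any required top-level OpenAPI keys the spec lacks."""
    return sorted(REQUIRED_TOP_LEVEL - spec.keys())

# Trimmed stand-in for the ChatGPT-generated specification.
spec = json.loads("""{
  "openapi": "3.1.0",
  "info": {"title": "Musical Meme Payroll Management System", "version": "1.0.0"},
  "paths": {"/": {"get": {"summary": "List employees"}}}
}""")
assert missing_keys(spec) == []
```

This is no substitute for full schema validation (Swagger checks far more), but it catches the most basic structural mistakes early.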


As an experiment, can the process be done in reverse, i.e. going from specification to code? See the next post.


