Friday, 10 April 2026

Starting a Literature Review with GenAI: A Supervisor’s Secret Weapon

Image: a robot in a classroom with a teacher, illustrating the idea of AI in education.


If you supervise research students at undergraduate or postgraduate level, you are likely to be very familiar with the "blank stare"—that moment a student first confronts the sheer, overwhelming mountain of academic literature they are expected to read, synthesise, and critique. Information overload is a genuine academic pain point, often manifesting as a severe case of "blank page" syndrome.

As academics, we know that starting is half the battle. This is where Generative AI, like ChatGPT-4o, shines not as a tool to write the review for the student, but as a structural scaffold. Much like using AI as a mirror to transform vague student ideas, we can use GenAI to help students map the thematic landscape of a topic before they dive into deep reading. It breaks the ice, organises chaos into a digestible format, and gives them a structured starting point.

Here is a practical, step-by-step workflow you can share with your students to help them generate a foundational literature matrix.

The Workflow: Mapping the Landscape

The goal of this exercise isn't just to find papers; it’s to identify common features and themes across the literature. By forcing the AI to iterate and categorise, we teach the student how to look for cross-disciplinary themes rather than just reading papers in isolated silos.

Have your students start with this complex prompt, adjusting the topic to their specific research area (in this example, we use "VR in Higher Education"):

Prompt 1:

You will generate a search and produce a summary table of published papers on the topic "VR in Higher Education". Iterate the following Steps 1–4 five times.

Step 1. Search for 3 new papers relating to the topic and add them to the stored list of papers.
Step 2. Identify Common Features shared by at least three papers not included in the previous iteration. All Common Features are maintained across iterations but can be revised.
Step 3. On each iteration, revise the table from the stored papers. The table has four parts: the Common Feature, a summary of the Common Feature, all papers that share the Common Feature, and all papers that don't match the Common Feature.
Step 4. Add the full reference for every paper to a Harvard-styled reference list.

Display the full table. Display the full reference list.

Why this works: The magic here lies in the iteration. The AI builds a comparative matrix, separating papers that share a theme (like "Challenges in Implementation") from those that don't. It immediately provides the student with a high-level, organised view of the current academic discourse.
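In practice, the maintained table ends up looking something like this (the entries here are placeholders, not real papers):

Common Feature: Challenges in Implementation
Summary: Barriers to adoption such as cost, staff training, and hardware availability.
Papers sharing this feature: Paper A; Paper C; Paper D
Papers not matching this feature: Paper B; Paper E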

Once the table is generated, the next step is translating that raw data into academic prose.

Prompt 2:

Using the table and reference list, analyse the results and summarise them with appropriate citations.

This generates a short, synthesised summary of the findings, helping the student see how an academic narrative is woven together from disparate sources.

Levelling Up: The Chain of Density (CoD)

Once students have the basic summary from Prompt 2, they shouldn't stop there. We want to push for richer, more academically dense writing. This is where you can introduce the Chain of Density (CoD) prompting technique.

Instead of accepting the first output, the CoD approach asks the AI to rewrite the summary multiple times, each time identifying missing "entities" (specific methodologies, nuanced findings, or theoretical frameworks) and weaving them into the text without increasing the word count. It forces the summary to become less generic and more informationally rich, mirroring the density of actual academic writing.
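If you would rather script the loop than paste prompts by hand, here is a minimal sketch using the OpenAI Python SDK. The model name, iteration count, and instruction wording are all my assumptions for illustration, not a canonical CoD implementation:

# Chain of Density: repeatedly densify a summary without lengthening it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COD_INSTRUCTION = (
    "Identify 2-3 informative entities (specific methodologies, nuanced "
    "findings, or theoretical frameworks) missing from the user's summary, "
    "then rewrite the summary to include them WITHOUT increasing the word "
    "count. Return only the rewritten summary."
)

summary = "...the first-pass summary from Prompt 2 goes here..."

for _ in range(3):  # three densification passes is a sensible starting point
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model will do
        messages=[
            {"role": "system", "content": COD_INSTRUCTION},
            {"role": "user", "content": summary},
        ],
    )
    summary = response.choices[0].message.content

print(summary)

Each pass should keep the word count flat while squeezing in more named entities; if later passes start dropping earlier detail, reduce the number of iterations.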

Ethics and Critical Assessment: The Reality Check

Before sending students off to generate literature matrices, we must establish a clear ethical boundary. GenAI is an assistant, not the primary researcher.

Academics and students alike must be acutely aware of AI's limitations—most notably, its tendency to hallucinate. AI models can, and will, invent realistic-sounding citations or confidently misrepresent a paper’s methodology. Therefore, this workflow is strictly a starting point.

Students must physically track down, read, and verify every single paper the AI suggests. The AI's synthesis should be treated as a draft map of a new territory; you still have to walk the terrain yourself to verify the landmarks. Relying blindly on AI outputs without human verification is a fast track to academic misconduct.

Final Thoughts

Used thoughtfully, GenAI transforms the daunting initial stages of a literature review into an engaging, structured exercise. It empowers students to overcome the blank page and helps them think thematically from day one.



All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon

Saturday, 4 April 2026

No Code, No Problem: How to Use ChatGPT to Compare Any Two Websites

System overview of the stages described in this post (produced using ChatGPT)



There's a moment many of us have had: you're looking at a competitor's website, then back at your own, and you just know something's different — but you can't quite put your finger on what. Traditionally, getting a rigorous answer meant hiring a consultant, running expensive user research, or spending hours doing it manually. What if you could get sharp, structured, comparative analysis in under ten minutes — without writing a single line of code?

That's exactly what this project set out to prove.

It started with a specific problem: comparing course websites to understand how they stacked up against a competitor. The goal wasn't just a surface-level look — it was to understand how real people, with different needs and backgrounds, would actually experience each site. The solution turned out to be a structured ChatGPT workflow built entirely in standard chat, using nothing more than a sequence of carefully designed prompts.

The core idea is simple but powerful: instead of asking ChatGPT one big question and hoping for the best, you break the task into stages. Each stage builds context before the next one begins. By the time the actual analysis runs, ChatGPT isn't working in the dark — it has a detailed picture of both websites and a clear human framework to evaluate them against. The result feels less like a generic AI summary and more like a considered brief from someone who actually did the reading.

Here's how it works.

Step 1 — Feed in the websites

The first subprompt instructs ChatGPT to ask for two websites, one at a time:

"Ask the user to enter two websites to compare. Label them as website1 and website2 respectively. Ask each one as a separate prompt."

Entering each site separately is deliberate. It gives ChatGPT a moment to analyse each one individually before any comparison begins — and it does. After each URL is entered, it produces a quick overview of key characteristics and early signals about the site's purpose, tone, and structure. Think of it as ChatGPT doing its homework before the debate starts.

Step 2 — Define your personas

This is where the workflow gets interesting. Rather than comparing websites in the abstract, the approach anchors the analysis in real human perspectives. Three personas are entered one at a time:

"Then ask three new prompts for new personas to be entered by the user. These will be labelled as persona1, persona2 and persona3 — a new prompt per persona."

The personas used in testing were deliberately varied: a time-pressed, university-educated man in his forties; a semi-retired woman with a doctoral background who leans towards world news; and a recently graduated engineer in his early twenties who lives on his phone. After each persona is entered, ChatGPT expands it — making reasonable assumptions about behaviour, expectations, and priorities. In testing, these inferences were consistently sensible and added useful texture to what could otherwise be quite flat demographic descriptions.

This step is worth pausing on, because it's the secret ingredient. Personas transform the analysis from "which site is better?" to "better for whom?" — which is a much more useful question to answer.

Step 3 — Run the analysis

With two websites and three personas loaded into context, the final subprompt does the heavy lifting:

"Please compare and contrast the websites against the personas. For each persona give a score out of 100 for the following: Overall score and Usability. Also add a summary for each persona. While analysing, take a pessimistic view and suggest improvements. Critically review the marketing and offer, and compare the sites against each other. Present these in a graphical way to aid understanding. The audience for the results of the analysis is the web team for the two sites."

The output is genuinely impressive. ChatGPT produced an executive summary for each site covering strengths, weaknesses and risks, followed by scored comparisons per persona. It then offered a strategic comparison across dimensions like trust, speed, content depth and engagement — ending with one-line recommendations per site. All without a single spreadsheet, survey or agency brief.

One instruction in that final prompt is worth highlighting: "take a pessimistic view." This small addition makes a meaningful difference. Left to its own devices, ChatGPT tends towards balance and diplomacy. Nudging it towards scepticism pushes the output past polite generalities and into the kind of direct, critical feedback that's actually useful for a web team trying to improve.
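If you want to run the whole thing as one reusable instruction rather than three separate subprompts, they stitch together naturally. A sketch (the wording here is illustrative, not the exact prompt from the project):

"Ask the user to enter two websites to compare, labelled website1 and website2, one prompt each. Then ask three further prompts for personas, labelled persona1, persona2 and persona3, one prompt per persona. Finally, compare and contrast the websites against the personas: for each persona give an Overall score and a Usability score out of 100, plus a summary; take a pessimistic view and suggest improvements; critically review the marketing and offer; and present the results graphically for the web teams of the two sites."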

What worked well

The staged approach is what makes this work. Each subprompt doesn't just collect information — it primes ChatGPT to think in a particular way before the next input arrives. By the time the comparison runs, the model has a rich, structured mental model of both sites and all three users. That's fundamentally different from dumping everything into a single prompt and hoping for coherence.

The persona framework also proved its value. It gave stakeholders a way into the results that felt human and relatable, rather than abstract. A web team looking at scores for a 22-year-old engineering graduate will instinctively know what to prioritise in a way that a generic usability score simply doesn't communicate.

What's next

The workflow held up well, but there's clear room to evolve. The visualisations produced were functional but basic — future iterations should push for richer, more interactive outputs that make the data easier to present to senior stakeholders. More ambitiously, the analysis could be tailored so each persona receives a version of the report written for them — not just used as a lens within a single document. Imagine handing a one-page summary to a time-poor marketing director versus a detailed breakdown to a UX designer; the underlying data is the same, the framing entirely different.

There's also an argument for making the workflow more dynamic. Rather than moving linearly through the stages, a more sophisticated version might pause after the initial website analysis to ask clarifying questions, or allow personas to be weighted differently depending on the strategic priority of each audience segment.

Areas to Improve

  • Better visualisations — move beyond basic outputs to richer, more interactive displays suited to senior stakeholders
  • Persona-tailored reports — deliver each persona a version of the analysis written for them, not just referenced within a single document
  • A more dynamic workflow — add clarifying questions mid-process and allow personas to be weighted by strategic priority

But as a starting point, this is a genuinely practical, no-code approach to competitive website analysis that any intermediate AI user can pick up today. The prompts are reusable, the structure adapts to almost any industry — from e-commerce to healthcare to financial services — and the whole thing runs in a standard ChatGPT session with no plugins, no integrations, and no specialist knowledge required.

Sometimes the most powerful tools are the ones hiding in plain sight. Have a go yourself and improve on it; I'd love to see your improvements in the comments.

Wednesday, 25 March 2026

AI as a Mirror: Transforming Vague Student Ideas into a More Rigorous Project Agreement





The Problem: The "Generic App" and the "Time Sink"

We’ve all been there: a student walks into a 1-to-1 with a vague desire to "do something with AI" or "build a fitness app." You spend 45 minutes trying to find a technical "hook" that justifies a Level 6 or Level 7 grade, only for the student to drift back into "CRUD app" territory by week three.

The Philosophy: AI as a Mirror

Instead of you doing the heavy lifting, this workflow uses AI as a Mirror. It reflects the student’s own skills and career goals back to them, but with the structural rigour of a virtual supervisory team. It’s not about the AI "giving" the idea; it’s about the AI forcing the student to defend and refine their own concepts until they hold water.

The Framework: 3 Months of Rigour

This prompt is specifically designed for intensive/conversion MSc or summer capstone projects. It assumes a tight 12-week implementation window. By forcing the AI to work within this constraint, we prevent the "I'm building the next Amazon" delusions and focus on a feasible, high-quality technical contribution. Stretching the window to 6 months instead of 3 is a minor tweak to the prompt.

The Supervisor’s Facilitation Guide

This is a tool, not a solution, and it will not always be right. To use it effectively in a session, I suggest keeping these three "supervisory moves" in mind:

  1. The Technical "Meat": In Stage 2, don't let the student just pick an idea because it "looks cool." Look for the Technical Challenge or Research Question. If the AI suggests a "Security Dashboard," ask the student: "What is the specific investigative element here?"

  2. Lean into the Conflict: In Stage 4, when the "Expert Personas" disagree, that’s your teaching moment. Use that friction to explain Critical Evaluation. If Persona 2 (the Tech Lead) hates the stack and Persona 3 (the Academic) loves the value, ask the student to mediate.

  3. The Technical Sanity Check: Treat AI hallucinations as a pedagogical feature. Tell the student: "The AI suggested this framework—your first task is to find one piece of official documentation proving this is viable for our 3-month window."


Post-Session: From Chat to Agreement

Once the "stop it" command is issued, the work isn't done. The output should serve two purposes:

  1. The Literature Review Skeleton: Use the "Steps" and "Sources" provided to build the student's initial reading list.

  2. The Project Agreement: This output acts as an initial agreement. If the student wants to pivot in Week 8, you refer back to this document to remind them of the agreed scope and technical goals. If they want to pivot in Week 1 or 2, the agreement can simply be revised.

A Note on Transparency: Encourage students to cite this process in their "Methodology" or "Reflective Practice" chapter. Documenting how they used AI to refine their scope is a great way to demonstrate professional AI literacy. Fittingly, the prompt below was itself refined using ChatGPT, with a few manual tweaks to correct it.


The Prompt

Follow this structured workflow exactly.

-------------------------

STAGE 1: PERSONA AND CONTEXT CREATION

-------------------------

 

Step 1: Create Persona1

- Ask the user to enter the details of Persona1, whose project this will be.

 

 

Step 2: Ask the user for the project area

Ask the user to describe:

- subject area or domain

- technologies of interest

- types of users involved

- preferred themes (e.g. AI, cybersecurity, web, data, accessibility, education, health, sustainability)

- anything to avoid

- project type (software, data-focused, research-led, or mixed)

- desired level of challenge

 

Then summarise the project context.

 

Step 3: Create Persona2. Ask the user to enter the details of this persona.

- A reviewer/adviser with a different perspective

- Include:


  - Expertise

  - What they care about most

  - Common concerns

  - What they consider a strong final-year project

 

Step 4: Create Persona3. Ask the user to enter the details of this persona.

- Another reviewer with a distinct perspective

- Include the same fields as Persona2, adding that this person is naturally pessimistic

 

-------------------------

STAGE 2: IDEA GENERATION

-------------------------

 

Using Persona1 and the project context, generate:

- 5 original project ideas

- Each must include:

  - Title

  - ~100-word summary

  - Why it matters to Persona1

 

Constraints:

- Suitable for UK final-year undergraduate Computing

- Achievable in 3 months

- Not overly broad

 

Then ask the user to choose one idea.

 

-------------------------

STAGE 3: PROPOSAL CREATION & REFINEMENT

-------------------------

 

Generate a proposal including:

- Title

- Summary (max 250 words)

- Aim

- Objectives

- Steps to achieve the goal in 3 months, including the need for a literature review

- Resources needed

- Useful sources of information

 

Then enter a refinement loop:

- Ask targeted questions (scope, users, tech, evaluation, risks, ethics)

- Update proposal after each answer

- Keep it realistic for 3 months

- Continue until the user types: stop it

 

-------------------------

STAGE 4: EXPERT REVIEW

-------------------------

 

After "stop it":

 

Simulate Persona1, Persona2, Persona3 reviewing the proposal.

 

For each stage of review:

- Provide each expert’s observations

- Suggested improvements

- Points of agreement/disagreement

- A shared refinement

 

Review across:

1. stakeholder fit

2. feasibility

3. academic value

4. technical suitability

5. risks and ethics

6. objectives and deliverables

7. resources and sources

 

Finish with:

- Final refined proposal with the following elements: Title

- Summary (max 250 words)

- Aim

- Objectives

- Steps to achieve the goal in 3 months, including the need for a literature review

- Resources needed

- Useful sources of information

- One action takeaway from each expert

 

-------------------------

RULES

-------------------------

- Keep everything feasible within 3 months

- Maintain UK university academic standards

- Ensure clarity and specificity

- Include evaluation considerations

- Avoid overly generic ideas

- Do NOT reveal hidden reasoning, only structured outputs

 

This prompt has itself gone through iterations: it started as a project-idea generator for a fixed context, became an idea generator for a specific student (Persona1), and is now a tool that tests the ideas with 'experts' and refines them.

Summary

From a Blank Page to a Stress-Tested Proposal

Starting a final-year project is often a battle against the "blank page" and the hidden risks that only emerge when it’s too late to change course. This framework acts as a Digital Co-Pilot for supervisors and students to use together, ensuring the first step of the academic journey is the right one.

The Methodology: Tree of Thoughts

Rather than providing a single, linear suggestion, this tool uses a Tree of Thoughts approach. It explores multiple branching paths for a project—evaluating different technologies, scopes, and domains—before pruning them down to the most viable candidate. This ensures the final proposal isn't just the first idea, but the best one.

The "Triple-Perspective" Committee

To achieve this, the Co-Pilot simulates a real-world project committee to provide a 360-degree view:

  1. The Student (Persona 1): Focuses on skill levels, career goals, and manageable workloads.

  2. The Academic (Persona 2): Ensures "academic weight," research depth, and alignment with university marking rubrics.

  3. The Pessimistic Engineer (Persona 3): The crucial Inverted AI perspective. This persona acts as the "Devil’s Advocate."

The Power of Pessimism (Inverted AI)

Standard AI is often "hallucinatorily optimistic," promising that complex features can be built in days. In an academic setting, optimism is a risk. By inverting the prompt through a pessimistic lens, we:

  • Identify "Project Killers": We find technical bottlenecks and ethical red flags before they become reality.

  • Aggressively Manage Scope: The pessimist cuts away "feature creep," leaving a lean, high-quality project that is actually achievable in three months.

  • Stress-Test the Logic: If an idea can survive the scrutiny of a sceptic, it is far more likely to survive a final viva or a professional technical review.

Your Collaborative Co-Pilot

This tool is designed for supervisors and students to sit down together. It bridges the gap between a student's ambition and the reality of a 12-week deadline. By the end of the session, the Co-Pilot provides a structured, "vetted" roadmap that has already survived its first round of critical feedback.


Summary for the User: Use this to transform "What should I do?" into "Here is exactly how I will succeed, and here is how I've mitigated the risks."


All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon

Tuesday, 24 March 2026

A Practical Guide to Building Lessons with AI (Real Savings, No Shortcuts)



There is no shortage of articles telling academics that Generative AI is going to transform education. It is, and it will continue to do so. However, many of these pieces are long on enthusiasm and short on detail. This is not one of those.

What follows is a practical account of using ChatGPT to build a real teaching session. I’ll cover what I did, what worked, what failed, and how long it actually took. No hype—just the reality of how it saved me time and how it could possibly do the same for you.

The Test Case

My subject was a four-hour session on Pytest in Django, aimed at final-year BSc Software Engineering students. These students have a basic grasp of Django but possess solid overall coding skills. The session was split into a one-hour lecture and three hours of hands-on practical work in VS Code.

The Strategy: Starting with the Prompt

The key to getting useful output is being specific upfront. Rather than simply asking ChatGPT to "create a lesson on Pytest," I provided a detailed prompt specifying the audience, topic, format, and—crucially—how I wanted the interaction to work. I wanted an iterative process where the AI asked me questions until I was satisfied before producing the final content. Here is the prompt I used:

"I want to create a four-hour teaching session — one hour lecture and three hours of practicals. Topic is Pytest in Django for a group of final-year BSc Software Engineering students. They have a basic understanding of Django.

I want this to be done in two parts: the lecture slides and then the practical teaching material using VS Code.

We will start with the slides. Please ask me questions until I type 'now stop,' at which point the slides should be produced. Then we will move to the practical material, again iterating through questions until I type 'done now'."

This two-part structure was deliberate. By separating the lecture from the lab, I ensured each section got the focus it deserved without the AI trying to do everything at once.

The "Thinking Partner" Approach

What worked best was the question-and-answer refinement loop. Instead of generating a wall of generic content, ChatGPT asked clarifying questions about learning objectives, the depth of detail required, and the specific tools students would use.

This is the part most guides skip: GenAI tools are far more effective as a thinking partner in the design phase than as a one-shot content generator. The questioning actually helped me think through what I did—and didn't—want to include, which ultimately helped me do a better job.

The Results: What was Produced?

  • The Lecture Slides: The initial output provided a logical structure: testing concepts, Pytest vs. Django’s built-in runner, fixtures, and mocking. However, it struggled to calibrate the depth. The first pass was pitched at beginners; it took a few rounds of the question loop to bring the content up to the level of final-year students.

  • The Practicals: ChatGPT produced a series of stepped exercises. The structure was a useful scaffold, but the exercises initially lacked context. They were "bare basics" and needed more "why" behind the "what" to be truly educational for these students.

The Reality Check: What I Changed

The code examples required the most intervention. While some were fine, others contained small but meaningful errors. These are the "silent killers" of a teaching session—errors that would waste ten or twenty minutes of lab time while students struggle to figure out why their environment isn't running.

The Rule: Treat every AI code example as untested until you have run it yourself. I rewrote several examples substantially and tweaked others.
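As a concrete illustration, here is a reconstructed example of the kind of "silent killer" I mean (not the actual generated code; it assumes pytest-django is installed and a Django settings module is configured):

import pytest
from django.contrib.auth.models import User

@pytest.mark.django_db  # the generated version omitted this marker
def test_user_creation():
    # Without the django_db marker, pytest-django blocks database access
    # and the test errors out in a way students find hard to diagnose.
    user = User.objects.create_user(username="alice", password="s3cret")
    assert user.username == "alice"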

I also found the slide text a bit "flat." It was accurate but dry. I rewrote the explanatory paragraphs in my own voice to ensure the materials felt like they came from a human, not a manual.

The Bottom Line: How long did it take?

Building a session like this from scratch—slides, practicals, code examples, and timing—usually takes me six to eight hours.

Using this AI-assisted approach, the entire process took about 4 hours. That included the iterative questioning, reviewing the output, fixing the code, testing the code, and rewriting the text.

The time spent was cut by roughly 50%. However, the remaining time still required my full attention, and having a 'partner' asking meaningful questions helped as the activity changed. The saving is real; the shortcut is not.

Is it worth it?

Good for...
  • Structure: getting a solid framework quickly.
  • Ideation: prompting you to think of missed topics.
  • Mechanical tasks: saving time on slide building.
  • Scaffolding: generating a base for exercises.

Not good for...
  • Context: understanding your specific students.
  • Subject knowledge: it cannot replace your expertise.
  • Accuracy: producing ready-to-use code.
  • Calibration: getting the pacing right without your input.

Where to go from here

If you want to try this, start simple. Pick one session you are already planning. Write a prompt that specifies:

  1. Your audience and their level.

  2. The format you need.

  3. The interaction style (ask me questions first, output second).

Review everything with the same critical eye you’d apply to a textbook you’ve never used before. Fix what’s wrong, cut what doesn’t fit, and keep the AI asking questions until you’re happy.

The goal isn’t to hand your job over to an AI. It’s to spend less time on the mechanical parts of the job so you have more time for the parts that actually require your expertise. In my experience, that is a trade well worth making.



All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon

Wednesday, 18 March 2026

From Boring to Beautiful: How I Used Claude to Transform a Dash App in Minutes


I've been learning Python data visualisation, working through Murat Durmus's Hands-On Introduction to Essential Python Libraries and Frameworks alongside the official Dash tutorial. The resulting code was functional — a basic bar chart comparing data for San Francisco and Montréal — but it looked like exactly what it was: a beginner's first attempt. Plain white background, default colours, numbered axes, and a title that just said "Data Viz."

So I decided to run an experiment. Could Claude AI turn a scrappy 20-line script into something genuinely worth showing people?


Before running the prompt
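For reference, the starting point was roughly the following. This is a reconstruction based on the public Dash tutorial the post cites; the exact data and wording in my script may have differed:

from dash import Dash, html, dcc
import plotly.express as px
import pandas as pd

app = Dash(__name__)

# Sample data in the style of the Dash tutorial: values per category
# for each of the two cities.
df = pd.DataFrame({
    "Fruit": ["Apples", "Oranges", "Bananas"] * 2,
    "Amount": [4, 1, 2, 2, 4, 5],
    "City": ["San Francisco"] * 3 + ["Montréal"] * 3,
})

fig = px.bar(df, x="Fruit", y="Amount", color="City", barmode="group")

app.layout = html.Div([
    html.H1("Data Viz"),  # the plain default-looking title
    dcc.Graph(figure=fig),
])

if __name__ == "__main__":
    app.run(debug=True)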


The First Prompt

I pasted the code into Claude.ai with a simple instruction: "Rewrite this following code to be graphically more interesting."

The result was striking. Claude switched to a dark "neon terminal" aesthetic — deep navy background, electric teal and magenta accents, and a stylish monospaced font. The bars got proper labels, the axes were cleaned up, and the whole thing felt intentional rather than accidental. It had gone from looking like homework to looking like a developer portfolio piece.


After the first prompt



Refining for a Real Audience

I pushed further. Same code, new prompt: "Rewrite this to be graphically more interesting for a general audience. Choose whatever works best for this audience."


After the second prompt



This time Claude made very different choices — and that's the interesting part. Recognising that a general audience needs warmth and clarity rather than technical cool, it switched to a bright, friendly design. Rounded bars in coral and teal, a clean white card layout, and a Nunito font that feels approachable rather than intimidating. It even added summary stat cards above the chart — showing the average and peak month for each city — so someone who doesn't want to "read" a chart can still instantly understand the data.
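The stat cards themselves are just small Dash components. A simplified sketch of the idea, with made-up styling and values rather than Claude's actual output:

from dash import html

def stat_card(label, value):
    # A small card so readers get the headline numbers without the chart.
    return html.Div(
        [html.H3(label), html.P(value)],
        style={"background": "#ffffff", "borderRadius": "12px",
               "padding": "16px", "boxShadow": "0 2px 8px rgba(0,0,0,0.08)"},
    )

summary_row = html.Div(
    [stat_card("San Francisco average", "2.3"),
     stat_card("Montréal peak", "5")],
    style={"display": "flex", "gap": "16px"},
)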

What I Noticed

The code grew substantially. My original 20 lines became well over 150 — defining colour palettes, layout styles, hover tooltips, and summary components. That might sound like more complexity, but it's actually the opposite: Claude generated the boilerplate so I didn't have to. The finished app is more readable for users, even if there's more code underneath.

The bigger lesson? The prompt matters as much as the tool. "More interesting" and "more interesting for a general audience" produced completely different results — one optimised for aesthetics, one for usability.


 

Code based on dash.plotly.com/tutorial and Murat Durmus (2023), pages 143–145.

 

References

Anthropic. (2024). Claude AI [Large language model]. Retrieved from https://claude.ai

Durmus, M. (2023). Hands-on introduction to essential Python libraries and frameworks (pp. 143–145). Amazon KDP. Retrieved from https://www.amazon.com

Plotly Technologies Inc. (2024). Dash documentation: Tutorial. Retrieved from https://dash.plotly.com/tutorial

All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon

Saturday, 14 March 2026

From Blank Page to Proposal: Mastering GenAI for Academic and Professional Projects

For many professionals, one of the hardest parts of any project isn’t the work itself—it’s the "blank page" problem. Whether you are drafting a project proposal, conducting a literature review, or brainstorming innovative solutions, getting started can be a significant mental hurdle.

Generative AI (GenAI) tools like ChatGPT, Claude, and Copilot are often described as "content creators," but their true value for professionals lies in enhancing productivity. By applying specific prompting techniques, you can transform a vague idea into a well-structured, reference-supported document in a fraction of the time.

Here is a three-step workflow to turn GenAI into your ultimate research and planning partner.

1. Brainstorming with "Personas" and "Templates"

Don't just ask the AI for "ideas." To get professional results, you must give the AI a Persona (who it is acting as) and a Template (how you want the output to look).

In our recent experiments, I used GenAI to generate project ideas for specialized fields like Ambient Assisted Living or Data Intelligence. By asking the AI to act as a "Supervisor" and by providing a template (Title, Introduction, Problem Statement), the AI produces structured options rather than generic paragraphs.
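Putting the persona and template together, a starting prompt might read (wording illustrative): "Act as a project supervisor in Ambient Assisted Living. Generate five project ideas, each using this template: Title; Introduction; Problem Statement. Then wait for me to choose one to expand."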

Pro-Tip: Once you have a few ideas, ask the AI to "expand" the best one with specific word counts for each section (e.g., "Justification: 500 words"). This forces the AI to provide depth rather than surface-level summaries.

2. The "Chain of Density" for Richer Detail

A common complaint about AI writing is that it feels "fluffy." To combat this, you can use a technique called Chain of Density (discussed in more detail in the literature-review post above).

Instead of accepting the first draft, you ask the AI to rewrite the content multiple times. In each iteration, you instruct it to identify missing "informative entities" (specific details, technical terms, or key facts) and fuse them into the existing text without increasing the word count.

  • The Result: The prose becomes increasingly "dense" with information, making it much more useful for a professional audience who values substance over wordiness.
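A typical iteration instruction might read (wording illustrative): "Identify 2–3 informative entities missing from the summary above, then rewrite the summary to include them without increasing the word count."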

3. Accelerated Literature Reviews

Gathering evidence is often the most time-consuming part of professional writing. You can use GenAI to "scout" for you. By using an iterative search prompt, you can instruct the AI to:

  1. Find specific papers or articles on a topic.

  2. Identify "Common Features" across those sources.

  3. Organise these into a Comparison Table showing which papers support which themes.

This doesn't just give you a list of links; it gives you a thematic analysis of the current landscape, complete with references.

The "Human-in-the-Loop" Requirement

While these tools are powerful, they are not autonomous. As a professional, you are the editor and the "validator." Before finalising any AI-assisted work, always:

  • Check the References: Ensure the papers and citations provided actually exist (AI can sometimes "hallucinate" plausible-sounding sources).

  • Apply the "Human Bit": Ask yourself, "Is this doable?" and "Does this fit my specific context?"

  • Refine the Voice: AI provides the skeleton; you provide the nuance, ethics, and professional judgment.

Conclusion

GenAI isn't here to replace the thinking process—it’s here to accelerate the structuring process. By using personas, density chains, and iterative research tables, you can spend less time staring at a blank screen and more time refining high-quality, impactful work.

Monday, 2 March 2026

#Onions and Prompts

I recently came across a genuinely useful idea on Tom's Guide about using an “onion prompt” with AI to organise your schedule when you’re overwhelmed. If you haven’t seen it, I’d strongly recommend reading the original article — it explains the thinking behind the method and the psychological principles that make it effective:

https://www.tomsguide.com/ai/i-use-the-onion-prompt-with-chatgpt-when-im-buried-in-tasks-it-cuts-through-clutter-in-seconds

How I’ve adapted it

I’m using Google Gemini rather than ChatGPT, and I used my task list from Google Keep instead of viewing my desktop. The tools are a little different, but the central idea is the same: strip away the layers hiding your real priorities and let AI “peel back” your to-do list until only what truly matters remains.

To make it easier, I copied my Google Keep list into a Google Doc and pasted it into the prompt.


Prompt 1: Prioritising with the “Onion” Method

Here’s the version I’m using (slightly adapted from the Tom’s Guide example):

I feel buried under competing tasks. Here is my to-do list:
"<paste your to-do list>"

Peel back the layers and categorise items into:
• Core (essential progress)
• Important (schedule soon)
• Surface noise (quick admin)
• Remove (close or ignore)

Then identify the top 3 priorities for the next 90 minutes.

Did it work?

Yes, it worked better than I expected.

  • It factored in the realistic 90-minute window I had at the end of the day.

  • It grouped related tasks into coherent activities.

  • It highlighted where I was at risk of “prep panic” before meetings.

  • It clearly identified what could safely be ignored.


Prompt 2: Turning Priorities into Time Blocks

I then followed up with a second prompt:

Taking into account that activities such as “XXXXX Slides & 3x Activities (Due 5/3)” take 1 hour each, identify time blocks to complete all tasks across the week.

This is where it became even more powerful.

It:

  • Split the week into structured blocks.

  • Themed days (e.g., “Meeting Marathon” and “Deep Thinking”).

  • Matched the type of work to appropriate times (lighter admin vs. cognitively demanding tasks).

That alignment between task type and time of day made the plan feel far more realistic and sustainable.


The original article on Tom's Guide goes into more detail about the psychological theory behind the method — particularly how cognitive overload hides priorities under layers of low-value activity.

If you often feel buried under competing demands, it’s worth trying.

Give the approach a go — and do read the original post for the deeper reasoning behind it.


All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon


