
Prompt Engineering strategies



Two recent publications, Eliot (2023) and White et al. (2023), have listed strategies for prompt engineering.

Eliot (2023) categorises different types of prompts used to get different actions or outputs. The key point, I feel, is that LLMs are powerful tools that are much more than search tools, i.e. "Tell me the answer to this". By selecting our prompts carefully, we can get LLMs closer to being a tool for enhancing our creativity and productivity.

Here's a table summarizing the prompt engineering strategies from Eliot (2023); a short code sketch of a couple of these strategies follows the table:

| Strategy | Summary |
| --- | --- |
| Imperfect Prompting | Using intentionally imperfect prompts to generate creative or unexpected outcomes. |
| Persistent Context and Custom Instructions Prompting | Setting a persistent context or providing custom instructions to tailor AI responses. |
| Multi-Persona Prompting | Directing AI to adopt one or multiple personas for role-play or perspective exploration. |
| Chain-of-Thought (CoT) Prompting | Requesting AI to process and explain reasoning stepwise for clearer responses. |
| Retrieval-Augmented Generation (RAG) Prompting | Enhancing AI responses with specialized knowledge from external databases. |
| Chain-of-Thought Factored Decomposition Prompting | Breaking down complex prompts into simpler questions for structured reasoning. |
| Skeleton-of-Thought (SoT) Prompting | Starting with an outline for the response and expanding into detailed content. |
| Show-Me Versus Tell-Me Prompting | Balancing between showing examples and telling explicit instructions in prompts. |
| Mega-Personas Prompting | Creating extensive personas to explore diverse perspectives or aggregate insights. |
| Certainty and Uncertainty Prompting | Explicitly addressing the level of certainty in AI responses. |
| Vagueness Prompting | Employing vague prompts to encourage AI to interpret or fill gaps. |
| Catalogs/Frameworks for Prompting | Utilizing frameworks to approach prompt construction systematically. |
| Flipped Interaction Prompting | Inverting interaction patterns by having AI ask questions. |
| Self-Reflection Prompting | Encouraging AI to assess its own responses or reasoning. |
| Add-On Prompting | Leveraging tools to aid in prompt creation or modification. |
| Conversational Prompting | Engaging AI in fluent dialogues beyond transactional exchanges. |
| Prompt-To-Code Prompting | Using AI to generate or assist with coding tasks. |
| Target-Your-Response (TAYOR) Prompting | Specifying the desired form or characteristics of AI responses within the prompt. |
| Macros and End-Goal Prompting | Applying macros for repeated scenarios and focusing on ultimate interaction goals. |
| Tree-of-Thoughts (ToT) Prompting | Encouraging AI to explore multiple reasoning paths before selecting a response. |
| Trust Layers for Prompting | Implementing layers to ensure reliability and appropriateness of AI responses. |
| Directional Stimulus Prompting (DSP) | Incorporating hints to subtly guide AI towards desired responses. |
| Privacy Invasive Prompting | Composing prompts cautiously to avoid sharing sensitive information. |
| Illicit or Disallowed Prompting | Avoiding prompts that lead to prohibited or unethical AI uses. |
| Chain-of-Density (CoD) Prompting | Condensing complex information into concise inputs for AI summarization. |
| “Take A Deep Breath” Prompting | Using metaphorical phrases to imply contemplation in AI processing. |
| Chain-of-Verification (CoV) Prompting | Methodically verifying AI responses to ensure accuracy and relevance. |
| Beat the “Reverse Curse” Prompting | Addressing AI's limitations in understanding the reverse of information. |
| Overcoming “Dumbing Down” Prompting | Utilizing detailed inputs to leverage AI's capabilities fully. |
| DeepFakes to TrueFakes Prompting | Ethically creating digital personas, distinguishing between harmful and legitimate uses. |
| Disinformation Detection and Removal Prompting | Employing AI for identifying and addressing disinformation. |
| Emotionally Expressed Prompting | Using emotionally charged wording in prompts to enhance AI responses. |
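
To make a couple of these concrete, here is a minimal sketch of Chain-of-Thought and Multi-Persona prompting in code. It assumes the OpenAI Python SDK (v1+) with an API key in the environment; the model name, the `ask` helper, and the example question are my own illustrations, not taken from Eliot's article.

```python
# A minimal sketch of two of the strategies above, assuming the OpenAI
# Python SDK (v1+) and that OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

question = ("A module has 24 assignments and each marker can take 4. "
            "Every assignment needs 2 markers. How many markers are needed?")

# Plain "tell me the answer" prompting.
print(ask(question))

# Chain-of-Thought (CoT): request stepwise reasoning before the answer.
print(ask(question + " Work through it step by step before giving the "
                     "final answer."))

# Multi-Persona: have the model answer from more than one perspective.
print(ask("Answer as two personas in turn, a module leader and a sceptical "
          "external examiner: " + question))
```

The same `ask` helper works for most of the strategies in the table, since they differ mainly in the wording of the prompt rather than in the API call.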


The paper by White et al. (2023) does a similar thing but in a more software engineering style: it treats prompts as having design patterns that we can 'model' and reuse. As with Eliot (2023), though, the emphasis is on harnessing the power of LLMs to improve our productivity rather than just asking for the answer.

| Pattern | Summary |
| --- | --- |
| Meta Language Creation | Focuses on creating a custom language or set of instructions that LLMs can understand, enhancing the interpretation of input to generate desired outputs. This pattern is particularly useful when standard input language is not efficient for conveying specific ideas to the LLM. |
| Output Automater | Allows users to create scripts or commands that automate tasks suggested by the LLM's outputs. This pattern tailors the LLM's output to initiate actions or processes automatically, based on the content generated. |
| Persona | Gives the LLM a specific persona or role, altering its output to match the characteristics or speech pattern of the designated role. This can be used to make interactions more engaging or tailored to specific contexts. |
| Visualization Generator | Enables the generation of visual outputs by producing text that can be fed into visualization tools. This pattern is instrumental when concepts or data are better understood visually, and the LLM is tasked with generating the descriptive text that a visualization tool can transform into graphical representations. |
| Recipe | Provides a sequence of steps or actions to achieve a specific goal, organizing the LLM's output into a structured procedure or guideline. This pattern can be applied to instructional content, where the output needs to guide users through a series of actions. |
| Template | Ensures the LLM's output follows a specific format or template, which can be crucial for outputs that must adhere to predefined structures, such as code snippets or formatted documents. This pattern helps maintain consistency and adherence to standards in the generated content. |
| Fact Check List | Generates a list of facts or statements that need verification, highlighting elements in the output that should be fact-checked. This pattern is valuable for improving the reliability and credibility of information provided by the LLM. |
| Question Refinement | Focuses on improving the quality of the input by suggesting better versions of the user's question, leading to more precise and useful outputs. This pattern enhances the interaction with the LLM by refining the input for clarity and specificity. |
| Alternative Approaches | Suggests different methods or strategies for achieving a task, providing the user with various options based on the LLM's knowledge. This pattern enriches the decision-making process by offering multiple solutions to a problem. |
| Cognitive Verifier | Instructs the LLM to generate a series of sub-questions related to the main query, encouraging a step-by-step approach to problem-solving that can lead to more thorough and reasoned outputs. |
| Refusal Breaker | Aims to reword or rephrase questions that the LLM initially refuses to answer, finding alternative ways to elicit information or responses from the LLM. This pattern is useful for overcoming limitations or restrictions in the LLM's programmed responses. |
| Flipped Interaction | Switches the roles, requiring the LLM to ask questions instead of generating outputs. This can be used for educational purposes, quizzes, or situations where prompting the user for more information is necessary. |
| Game Play | Generates output in the form of a game, involving the user in interactive and engaging content. This pattern leverages the LLM's capabilities to create entertaining or educational games based on textual input. |
| Infinite Generation | Produces continuous output without requiring the user to re-enter prompts, useful for generating stories, dialogues, or any content where ongoing generation is desired. |
| Context Manager | Controls the contextual background information that the LLM uses to generate its output, ensuring that the content remains relevant and accurate within a given context. This pattern is crucial for maintaining coherence and relevance in longer conversations or complex scenarios where the LLM needs to retain and apply context-specific knowledge. |
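
As an illustration of how these patterns compose, here is a rough sketch of the Persona and Template patterns expressed as chat messages. It again assumes the OpenAI Python SDK (v1+); the prompt wording is my own paraphrase of the patterns, not text from the paper.

```python
# Sketch of the Persona and Template patterns combined in one request,
# assuming the OpenAI Python SDK (v1+) with an API key in the environment.
from openai import OpenAI

client = OpenAI()

messages = [
    # Persona pattern: give the LLM a role that shapes tone and content.
    {"role": "system",
     "content": "You are an experienced code reviewer on a Python team."},
    # Template pattern: pin the output to a fixed structure so every reply
    # has the same fields, which is easy to read or parse downstream.
    {"role": "user",
     "content": ("Review the function below. Reply using exactly this "
                 "template:\n"
                 "SUMMARY: <one sentence>\n"
                 "ISSUES: <bulleted list>\n"
                 "SUGGESTED FIX: <code, or 'none'>\n\n"
                 "def add(a, b): return a - b")},
]

reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(reply.choices[0].message.content)
```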


Here's a table summarizing the strategies and patterns that overlap between the two sources, along with why they align; a code sketch of the shared Flipped Interaction idea follows the table:

| Strategy/Pattern | Similarities Summary |
| --- | --- |
| Persona & Multi-Persona Prompting | Both involve assigning a specific character or role to the LLM to tailor its responses. This strategy enhances engagement and customizes the interaction based on the assumed persona, making it useful for role-based simulations or to emulate specific voices. |
| Template & Output Automater/Recipe | The Template strategy ensures outputs follow a specific format, similar to how the Output Automater pattern can script tasks based on the LLM's output, and the Recipe pattern provides structured procedures. These approaches structure the information output for consistency and actionability. |
| Flipped Interaction | Directly corresponds to flipping the interaction role so the LLM asks questions, promoting user engagement or gathering additional input, as in educational uses, quizzes, or exploratory discussions. |
| Chain-of-Thought (CoT) Prompting & Cognitive Verifier | CoT involves stepwise problem-solving and explanation, closely related to the Cognitive Verifier pattern that generates sub-questions for deeper analysis. Both enhance the LLM's reasoning transparency and the thoroughness of the response. |
| Alternative Approaches & Visualization Generator | While not directly similar, both involve exploring different methods or representations for achieving a task or explaining a concept. Alternative Approaches offers multiple solutions, and Visualization Generator turns textual descriptions into visual outputs for enhanced understanding. |
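
Since Flipped Interaction appears in both lists, here is a rough sketch of it as a loop in which the model asks the questions and the user types the answers. The loop structure, prompt wording, and turn cap are my own illustration, again assuming the OpenAI Python SDK (v1+).

```python
# Sketch of Flipped Interaction: the model interviews the user one
# question at a time. Assumes the OpenAI Python SDK (v1+).
from openai import OpenAI

client = OpenAI()

messages = [{
    "role": "user",
    "content": ("Help me draft a student project proposal by interviewing "
                "me. Ask one question at a time; once you have enough "
                "information, write the proposal."),
}]

for _ in range(5):  # cap the exchange for this sketch
    reply = client.chat.completions.create(model="gpt-4o-mini",
                                           messages=messages)
    model_turn = reply.choices[0].message.content
    print("LLM:", model_turn)
    messages.append({"role": "assistant", "content": model_turn})
    messages.append({"role": "user", "content": input("You: ")})
```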



Eliot, L. (2023) ‘Must-Read Best Of Practical Prompt Engineering Strategies To Become A Skillful Prompting Wizard In Generative AI’, Forbes, 28 December. Available at: https://www.forbes.com/sites/lanceeliot/2023/12/28/must-read-best-of-practical-prompt-engineering-strategies-to-become-a-skillful-prompting-wizard-in-generative-ai/?sh=20b2c07f19cd (Accessed: 15 March 2024).

White, J. et al. (2023) ‘A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT’. Available at: https://arxiv.org/abs/2302.11382v1 (Accessed: 18 March 2024).


All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon
