Two recent publications, Eliot (2023) and White et al. (2023), catalogue strategies for prompt engineering.
Here's a table summarizing the prompt engineering strategies from Eliot (2023):
Strategy | Summary |
---|---|
Imperfect Prompting | Using intentionally imperfect prompts to generate creative or unexpected outcomes. |
Persistent Context and Custom Instructions Prompting | Setting a persistent context or providing custom instructions to tailor AI responses. |
Multi-Persona Prompting | Directing AI to adopt one or multiple personas for role-play or perspective exploration. |
Chain-of-Thought (CoT) Prompting | Requesting AI to reason stepwise and explain its reasoning for clearer responses (a code sketch of this, paired with Chain-of-Verification, follows the table). |
Retrieval-Augmented Generation (RAG) Prompting | Enhancing AI responses with specialized knowledge from external databases. |
Chain-of-Thought Factored Decomposition Prompting | Breaking down complex prompts into simpler questions for structured reasoning. |
Skeleton-of-Thought (SoT) Prompting | Starting with an outline for the response and expanding into detailed content. |
Show-Me Versus Tell-Me Prompting | Balancing between showing examples and telling explicit instructions in prompts. |
Mega-Personas Prompting | Creating extensive personas to explore diverse perspectives or aggregate insights. |
Certainty and Uncertainty Prompting | Explicitly addressing the level of certainty in AI responses. |
Vagueness Prompting | Employing vague prompts to encourage AI to interpret or fill gaps. |
Catalogs/Frameworks for Prompting | Utilizing frameworks to approach prompt construction systematically. |
Flipped Interaction Prompting | Inverting interaction patterns by having AI ask questions. |
Self-Reflection Prompting | Encouraging AI to assess its own responses or reasoning. |
Add-On Prompting | Leveraging tools to aid in prompt creation or modification. |
Conversational Prompting | Engaging AI in fluent dialogues beyond transactional exchanges. |
Prompt-To-Code Prompting | Using AI to generate or assist with coding tasks. |
Target-Your-Response (TAYOR) Prompting | Specifying the desired form or characteristics of AI responses within the prompt. |
Macros and End-Goal Prompting | Applying macros for repeated scenarios and focusing on ultimate interaction goals. |
Tree-of-Thoughts (ToT) Prompting | Encouraging AI to explore multiple reasoning paths before selecting a response. |
Trust Layers for Prompting | Implementing layers to ensure reliability and appropriateness of AI responses. |
Directional Stimulus Prompting (DSP) | Incorporating hints to subtly guide AI towards desired responses. |
Privacy Invasive Prompting | Composing prompts cautiously to avoid sharing sensitive information. |
Illicit or Disallowed Prompting | Avoiding prompts that lead to prohibited or unethical AI uses. |
Chain-of-Density (CoD) Prompting | Condensing complex information into concise inputs for AI summarization. |
“Take A Deep Breath” Prompting | Using metaphorical phrases to imply contemplation in AI processing. |
Chain-of-Verification (CoV) Prompting | Methodically verifying AI responses to ensure accuracy and relevance. |
Beat the “Reverse Curse” Prompting | Addressing the AI's difficulty reasoning about reversed relationships (knowing that “A is B” but failing to infer that “B is A”). |
Overcoming “Dumbing Down” Prompting | Utilizing detailed inputs to leverage AI's capabilities fully. |
DeepFakes to TrueFakes Prompting | Ethically creating digital personas, distinguishing between harmful and legitimate uses. |
Disinformation Detection and Removal Prompting | Employing AI for identifying and addressing disinformation. |
Emotionally Expressed Prompting | Using emotionally charged wording in prompts to enhance AI responses. |
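To make a couple of these strategies concrete, here is a minimal Python sketch of Chain-of-Thought prompting followed by a Chain-of-Verification pass over the draft answer. The `ask_llm` helper is a hypothetical stand-in for whatever chat-completion API you use, and the prompt wording is illustrative, not taken from Eliot's article.

```python
# Hypothetical helper: wire this to whichever chat-completion API you use.
def ask_llm(prompt: str) -> str:
    return "<model response>"

def chain_of_thought(question: str) -> str:
    # CoT: ask the model to reason stepwise before committing to an answer.
    prompt = (
        f"{question}\n\n"
        "Think through this step by step, showing your reasoning, "
        "then state your final answer on the last line."
    )
    return ask_llm(prompt)

def chain_of_verification(question: str, draft_answer: str) -> str:
    # CoV: have the model list the claims in its draft, check each one,
    # and emit a corrected final answer.
    prompt = (
        f"Question: {question}\n"
        f"Draft answer: {draft_answer}\n\n"
        "List the factual claims in the draft answer, verify each one "
        "independently, and output a corrected final answer."
    )
    return ask_llm(prompt)

question = "In which year did the first Moon landing occur?"
draft = chain_of_thought(question)
print(chain_of_verification(question, draft))
```

The point of the second call is that the model audits its own draft before the answer reaches the user.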
And here's a table summarizing the prompt patterns catalogued by White et al. (2023):
Pattern | Summary |
---|---|
Meta Language Creation | Focuses on creating a custom language or set of instructions that LLMs can understand, enhancing the interpretation of input to generate desired outputs. This pattern is particularly useful when standard input language is not efficient for conveying specific ideas to the LLM. |
Output Automater | Allows users to create scripts or commands that automate tasks suggested by the LLM's outputs. This pattern tailors the LLM's output to initiate actions or processes automatically, based on the content generated. |
Persona | Gives the LLM a specific persona or role, altering its output to match the characteristics or speech pattern of the designated role. This can be used to make interactions more engaging or tailored to specific contexts (see the sketch after this table). |
Visualization Generator | Enables the generation of visual outputs by producing text that can be fed into visualization tools. This pattern is instrumental when concepts or data are better understood visually, and the LLM is tasked with generating the descriptive text that a visualization tool can transform into graphical representations. |
Recipe | Provides a sequence of steps or actions to achieve a specific goal, organizing the LLM's output into a structured procedure or guideline. This pattern can be applied to instructional content, where the output needs to guide users through a series of actions. |
Template | Ensures the LLM's output follows a specific format or template, which can be crucial for outputs that must adhere to predefined structures, such as code snippets or formatted documents. This pattern helps maintain consistency and adherence to standards in the generated content. |
Fact Check List | Generates a list of facts or statements that need verification, highlighting elements in the output that should be fact-checked. This pattern is valuable for improving the reliability and credibility of information provided by the LLM. |
Question Refinement | Focuses on improving the quality of the input by suggesting better versions of the user's question, leading to more precise and useful outputs. This pattern enhances the interaction with the LLM by refining the input for clarity and specificity. |
Alternative Approaches | Suggests different methods or strategies for achieving a task, providing the user with various options based on the LLM's knowledge. This pattern enriches the decision-making process by offering multiple solutions to a problem. |
Cognitive Verifier | Instructs the LLM to generate a series of sub-questions related to the main query, encouraging a step-by-step approach to problem-solving that can lead to more thorough and reasoned outputs. |
Refusal Breaker | Aims to reword or rephrase questions that the LLM initially refuses to answer, finding alternative ways to elicit information or responses from the LLM. This pattern is useful for overcoming limitations or restrictions in the LLM's programmed responses. |
Flipped Interaction | Switches the roles, requiring the LLM to ask questions instead of generating outputs. This can be used for educational purposes, quizzes, or situations where prompting the user for more information is necessary. |
Game Play | Generates output in the form of a game, involving the user in interactive and engaging content. This pattern leverages the LLM's capabilities to create entertaining or educational games based on textual input. |
Infinite Generation | Produces continuous output without requiring the user to re-enter prompts, useful for generating stories, dialogues, or any content where ongoing generation is desired. |
Context Manager | Controls the contextual background information that the LLM uses to generate its output, ensuring that the content remains relevant and accurate within a given context. This pattern is crucial for maintaining coherence and relevance in longer conversations or complex scenarios where the LLM needs to retain and apply context-specific knowledge. |
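A similar sketch for White et al.'s catalogue: the prompt below combines the Persona, Template, and Fact Check List patterns in a single call. The security-auditor persona and the report format are invented for illustration, and `ask_llm` is again a hypothetical stub.

```python
# Hypothetical helper: wire this to a real model call (any provider).
def ask_llm(prompt: str) -> str:
    return "<model response>"

# Persona pattern: fix the role the model should speak as.
PERSONA = "You are a senior security auditor reviewing code for vulnerabilities."

# Template pattern: pin the output to a predefined structure; the last
# section doubles as a Fact Check List for a human to verify.
TEMPLATE = (
    "Report format:\n"
    "1. Summary (one sentence)\n"
    "2. Findings (bullet list)\n"
    "3. Facts to verify (claims a human should fact-check)"
)

def audit(code_snippet: str) -> str:
    prompt = (
        f"{PERSONA}\n\n"
        "Review the following code:\n"
        "---\n"
        f"{code_snippet}\n"
        "---\n\n"
        f"{TEMPLATE}"
    )
    return ask_llm(prompt)

print(audit("eval(input('Enter expression: '))"))
```

Pinning the structure in the prompt, rather than post-processing the output, is what keeps repeated runs consistent.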
Here's a new table summarizing the strategies and patterns that are similar between the two discussions, along with a summary of why they align:
Strategy/Pattern | Similarities Summary |
---|---|
Persona & Multi-Persona Prompting | Both involve assigning a specific character or role to the LLM to tailor its responses. This enhances engagement and customizes the interaction based on the assumed persona, making it useful for role-based simulations or for emulating specific voices. |
Template & Output Automater/Recipe | The Template pattern ensures outputs follow a specific format, the Output Automater pattern scripts tasks based on the LLM's output, and the Recipe pattern provides structured procedures. All three structure the output for consistency and actionability. |
Flipped Interaction | Appears in both lists: the LLM asks the questions instead of answering them, promoting user engagement or gathering additional input, as in educational uses, quizzes, or exploratory discussions (a runnable sketch follows this table). |
Chain-of-Thought (CoT) Prompting & Cognitive Verifier | CoT involves stepwise problem-solving and explanation, closely related to the Cognitive Verifier pattern, which generates sub-questions for deeper analysis. Both improve the transparency of the LLM's reasoning and the thoroughness of its responses. |
Alternative Approaches & Visualization Generator | Not directly similar, but both involve exploring different methods or representations for achieving a task or explaining a concept: Alternative Approaches offers multiple solutions, while Visualization Generator turns textual descriptions into inputs for visual tools. |
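Since Flipped Interaction is the cleanest one-to-one match between the two lists, here is a minimal sketch of the loop it implies, with the model interviewing the user until it signals it is done. The `ask_llm` stub and the `DONE` sentinel are assumptions made for the sketch, not details from either source.

```python
# Hypothetical helper: replace with a real chat-completion call.
def ask_llm(history: list[dict]) -> str:
    return "DONE <final recommendation>"

def flipped_interaction(goal: str, get_user_reply, max_turns: int = 5) -> str:
    # Flipped Interaction: the model asks the questions; the user answers.
    history = [{
        "role": "system",
        "content": (
            f"Your goal: {goal}. Ask me one question at a time until you "
            "have enough information, then reply with DONE followed by "
            "your final answer."
        ),
    }]
    for _ in range(max_turns):
        reply = ask_llm(history)
        if reply.startswith("DONE"):  # assumed sentinel, not from either source
            return reply
        history.append({"role": "assistant", "content": reply})
        history.append({"role": "user", "content": get_user_reply(reply)})
    return ask_llm(history)

# In a real session, pass `input` so the user answers interactively.
print(flipped_interaction("recommend a laptop that fits my needs", input))
```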
Eliot, L. (2023) ‘Must-Read Best Of Practical Prompt Engineering Strategies To Become A Skillful Prompting Wizard In Generative AI’, Forbes, 28 December. Available at: https://www.forbes.com/sites/lanceeliot/2023/12/28/must-read-best-of-practical-prompt-engineering-strategies-to-become-a-skillful-prompting-wizard-in-generative-ai/?sh=20b2c07f19cd (Accessed: 15 March 2024).
White, J. et al. (2023) ‘A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT’, arXiv preprint arXiv:2302.11382. Available at: https://arxiv.org/abs/2302.11382v1 (Accessed: 18 March 2024).