
Prompting tips straight from the source: OpenAI shares its know-how on how to prompt.

At the heart of the prompting tips are six strategies, which OpenAI breaks down as follows.

Give clear instructions

GPT models can't read minds, so it's important to give clear instructions to get the desired result. Here are some tactics for giving clear instructions (a sketch combining several of them follows the list).

  • Include details in the query to get more meaningful answers.
  • Give the chatbot a role ("You are an expert …")
  • Use delimiters (triple quotes) to identify specific parts of the query
  • Specify the steps required to complete a task
  • Provide examples
  • Specify the desired length of output
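As a rough illustration (not taken from OpenAI's guide), several of these tactics can be combined in a single request using the official openai Python library. The model name and the editing instructions below are placeholders:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Role, explicit steps, delimiters and a length limit in one request.
response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any chat-capable model works here
    messages=[
        {
            "role": "system",
            "content": (
                "You are an expert technical editor. "
                "Step 1 - Summarize the text delimited by triple quotes in one sentence. "
                "Step 2 - List up to three concrete improvements as bullet points."
            ),
        },
        {"role": "user", "content": '"""<insert text to edit>"""'},
    ],
)
print(response.choices[0].message.content)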

Provide reference texts

Language models are prone to giving incorrect answers, especially to questions about "esoteric" topics or requests for quotes and URLs. Providing reference text can help reduce the number of incorrect answers. Tactics for this strategy include (a sketch follows the list):

  • Instructing the model to respond based on a reference text
  • Instructing the model to respond with quotes from a reference text
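A minimal sketch of the first tactic, assuming a reference text and a question supplied by the user (placeholders below); the reference text is passed inside triple-quote delimiters and the model is told to answer only from it:

from openai import OpenAI

client = OpenAI()

reference_text = "<insert reference text>"
question = "<insert question>"

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Use the provided article delimited by triple quotes to answer questions. "
                "If the answer cannot be found in the article, write 'I could not find an answer.'"
            ),
        },
        {"role": "user", "content": f'"""{reference_text}"""\n\nQuestion: {question}'},
    ],
)
print(response.choices[0].message.content)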

Break complex tasks into simpler subtasks

Because complex tasks tend to have higher error rates than simpler tasks, it can be helpful to break a complex task into a series of modular subtasks. Tactics for this strategy include:

  • Using intent classification to identify the most relevant instructions for a user query
  • Summarizing long documents piecewise and constructing a full summary recursively (see the sketch after this list)
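The recursive summarization tactic could look roughly like the following sketch. The chunk size is arbitrary and summarize() is a hypothetical helper wrapping a single chat completion call:

from openai import OpenAI

client = OpenAI()

def summarize(text: str) -> str:
    """One chat completion call that condenses the given text (sketch)."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": "Summarize the user's text in a few sentences."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

def summarize_document(document: str, chunk_size: int = 8000) -> str:
    # Split the long document into manageable chunks, summarize each chunk,
    # then recursively summarize the concatenated chunk summaries.
    chunks = [document[i:i + chunk_size] for i in range(0, len(document), chunk_size)]
    summaries = [summarize(chunk) for chunk in chunks]
    combined = "\n".join(summaries)
    return combined if len(chunks) <= 1 else summarize_document(combined, chunk_size)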

Give the model time to "think"

Models make more reasoning errors when they try to answer immediately. Asking the model to work out a "chain of thought" (to think step by step) before responding can help it arrive at correct answers more reliably. Tactics for this strategy include:

  • Asking the model to generate its own solution before evaluating an existing one
  • Using an inner monologue or a sequence of queries to hide the model's reasoning process from the user
  • Asking the model if it missed something in previous iterations

SYSTEM
Follow these steps to answer the user queries.

Step 1 - First work out your own solution to the problem. Don't rely on the student's solution since it may be incorrect. Enclose all your work for this step within triple quotes (""").

Step 2 - Compare your solution to the student's solution and evaluate if the student's solution is correct or not. Enclose all your work for this step within triple quotes (""").

Step 3 - If the student made a mistake, determine what hint you could give the student without giving away the answer. Enclose all your work for this step within triple quotes (""").

Step 4 - If the student made a mistake, provide the hint from the previous step to the student (outside of triple quotes). Instead of writing "Step 4 - ..." write "Hint:".

USER
Problem Statement: <insert problem statement>

Student Solution: <insert student solution>

Example prompt from OpenAI for an inner monologue
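One way to apply the inner-monologue tactic from the example above (a sketch, not OpenAI's code) is to strip everything the model wraps in triple quotes before showing the reply to the student, so that only the final hint remains visible:

import re

def visible_part(model_reply: str) -> str:
    # Remove all sections the model enclosed in triple quotes (its working steps)
    # so the student only sees the final hint.
    return re.sub(r'""".*?"""', "", model_reply, flags=re.DOTALL).strip()

# Invented example reply following the prompt format above.
reply = '"""x = 2 is correct."""\n"""The student wrote x = 3."""\nHint: Re-check your last subtraction step.'
print(visible_part(reply))  # -> Hint: Re-check your last subtraction step.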

In this context, OpenAI does not mention more unusual prompt additions such as "Take a deep breath" or the tactic of putting emotional pressure on the chatbot.

Use external tools

The typical weaknesses of large language models can be compensated for by feeding the model the output of other tools, such as text retrieval systems or code execution engines. Tool-augmented language models are potentially much more powerful than pure language models. Tactics for this strategy include:

  • Using embeddings-based search to implement efficient knowledge retrieval
  • Using code execution to perform more precise calculations or call external APIs
  • Giving the model access to specific functions (see the sketch after this list)
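For the function-access tactic, the chat completions API accepts a tools list with JSON-schema descriptions of callable functions. This sketch only shows how a call is requested, not how it is executed; the function name, schema, and model name are made up for illustration:

from openai import OpenAI

client = OpenAI()

# Hypothetical function the model may ask us to call.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_stock_price",
            "description": "Get the latest price for a stock ticker symbol.",
            "parameters": {
                "type": "object",
                "properties": {"ticker": {"type": "string"}},
                "required": ["ticker"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": "What does MSFT trade at right now?"}],
    tools=tools,
)

# If the model decides the function is needed, it returns a tool call
# with JSON arguments instead of a plain text answer.
print(response.choices[0].message.tool_calls)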

In addition, OpenAI recommends evaluating frequently used prompts with targeted evaluations rather than relying solely on gut feeling to assess quality. These evaluations should reflect real-world usage, include many test cases, and be easy to automate or repeat.

The results can be evaluated by computers, humans, or a mix of both. OpenAI offers open-source software called Evals for this task.
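Evals defines its own registry and configuration format; as a much simpler illustration of the idea, a hand-rolled evaluation could loop over test cases and check each answer automatically. The prompts, expected answers, and pass criterion below are invented:

from openai import OpenAI

client = OpenAI()

# Invented test cases: (user prompt, substring the answer must contain).
test_cases = [
    ("What is the capital of France? Answer in one word.", "Paris"),
    ("What is 12 * 12? Answer with the number only.", "144"),
]

passed = 0
for prompt, expected in test_cases:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    passed += expected in answer  # crude automatic check

print(f"{passed}/{len(test_cases)} test cases passed")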

The OpenAI Prompting Guide contains prompt examples for all the scenarios described above. More basic tips and prompt ideas can be found in our article on ChatGPT prompt strategies.

Summary
  • OpenAI publishes prompting tips divided into six strategies to get better results from AI language models and ChatGPT.
  • The strategies include giving clear instructions, providing reference text, breaking complex tasks into subtasks, giving the model time to "think," using external tools, and performing targeted evaluations.
  • These tips should help improve the performance of AI language models and help users get more effective and accurate responses from the models.