Prompting tips straight from the source: OpenAI shares its own guidance on how to prompt its models.
At the heart of the guidance are six strategies, which OpenAI breaks down as follows.
Give clear instructions
GPT models can't read minds, so it's important to give clear instructions to get the desired result. Here are some tactics for giving clear instructions:
- Include details in the query to get more meaningful answers
- Give the chatbot a role ("You are an expert …")
- Use delimiters (e.g. triple quotes) to mark distinct parts of the query
- Specify the steps required to complete a task
- Provide examples
- Specify the desired length of the output
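Several of these tactics can be combined in a single prompt. A minimal sketch of such a prompt builder, where the persona, step list, and examples are illustrative assumptions rather than wording from OpenAI's guide:

```python
def build_prompt(role, steps, examples, text, max_words):
    """Compose a clear-instruction prompt: persona, explicit steps,
    examples, a delimited input, and a length limit."""
    step_lines = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    example_lines = "\n".join(f"Example: {e}" for e in examples)
    return (
        f"You are an expert {role}.\n"             # give the chatbot a role
        f"Follow these steps:\n{step_lines}\n"     # specify the steps
        f"{example_lines}\n"                       # provide examples
        'Summarize the text between """ marks '    # delimiters mark the input
        f"in at most {max_words} words.\n"         # desired output length
        f'"""{text}"""'
    )

# Hypothetical usage; all argument values are made up for illustration.
prompt = build_prompt(
    role="technical editor",
    steps=["Read the text", "Identify key points", "Write the summary"],
    examples=["Input: long report -> Output: three-sentence abstract"],
    text="GPT models cannot read minds, so instructions must be explicit.",
    max_words=50,
)
print(prompt)
```

The point is not the helper function itself but that each tactic maps to one concrete line of the final prompt.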
Provide reference texts
Language models are prone to incorrect answers, especially for questions about "esoteric" topics or for quotes and URLs. Providing reference text can help reduce the number of incorrect answers. Tactics for this strategy include:
- Instructing the model to respond based on a reference text
- Instructing the model to respond with quotes from a reference text
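Both tactics can be expressed as a prompt template. A sketch, where the exact wording and the fallback phrase are assumptions, not text from OpenAI's guide:

```python
def grounded_prompt(question, reference):
    """Ask the model to answer only from the supplied reference text,
    to quote its evidence, and to admit when the answer is absent."""
    return (
        'Use only the reference text between """ marks to answer.\n'
        "Quote the passage that supports your answer.\n"
        'If the answer is not in the text, reply "I could not find an answer."\n'
        f'Reference: """{reference}"""\n'
        f"Question: {question}"
    )

# Hypothetical usage with made-up content.
p = grounded_prompt(
    question="What caps the length of a completion?",
    reference="The max_tokens parameter caps the length of a completion.",
)
print(p)
```

Giving the model an explicit escape hatch ("I could not find an answer") matters: without it, the model is more likely to invent an answer when the reference is silent.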
Break complex tasks into simpler subtasks
Because complex tasks tend to have higher error rates than simpler tasks, it can be helpful to break a complex task into a series of modular components. Tactics for this strategy include:
- Identifying the instructions most relevant to a user request via intent classification
- Summarizing long documents piece by piece and recursively building a complete summary
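The recursive-summary tactic can be sketched in a few lines. Here `summarize` is a stand-in for a model call; the chunking on word boundaries and the character budget are illustrative assumptions:

```python
def chunk(text, max_chars):
    """Split text into pieces of at most max_chars characters,
    breaking on word boundaries (a single oversized word is kept whole)."""
    words, pieces, current = text.split(), [], ""
    for w in words:
        if current and len(current) + 1 + len(w) > max_chars:
            pieces.append(current)
            current = w
        else:
            current = f"{current} {w}".strip()
    if current:
        pieces.append(current)
    return pieces

def recursive_summary(text, summarize, max_chars=1000):
    """Summarize each chunk, then summarize the joined chunk summaries,
    repeating until the text fits in a single chunk."""
    while len(text) > max_chars:
        parts = chunk(text, max_chars)
        text = " ".join(summarize(p) for p in parts)
    return summarize(text)

# Hypothetical usage: a truncating stub plays the role of the model call.
stub = lambda t: t[:50]
summary = recursive_summary(("word " * 500).strip(), stub, max_chars=100)
```

In practice `summarize` would call a model with a summarization prompt; the structure (map over chunks, then reduce) is what the tactic describes.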
Give the model time to "think"
Models make more reasoning errors when they try to answer immediately. Asking the model to form a "chain of thought" (think step by step) before responding can help it arrive at correct answers more reliably. Tactics for this strategy include:
- Asking the model to generate its own solution before evaluating an existing one
- Using an inner monologue or a sequence of queries to hide the model's reasoning process from the user
- Asking the model if it missed something in previous iterations
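The first tactic, having the model work out its own solution before judging an existing one, amounts to a prompt template. A sketch with assumed wording:

```python
def evaluation_prompt(problem, student_solution):
    """Ask the model to solve the problem itself first, so its verdict on
    the student's solution is not anchored to the student's reasoning."""
    return (
        "First work out your own solution to the problem, step by step.\n"
        "Then compare your solution to the student's solution below and "
        "evaluate whether the student's solution is correct.\n"
        "Do not decide whether it is correct until you have solved "
        "the problem yourself.\n"
        f"Problem: {problem}\n"
        f"Student's solution: {student_solution}"
    )

# Hypothetical usage with a made-up arithmetic problem.
q = evaluation_prompt("What is 17 * 24?", "17 * 24 = 398")
print(q)
```

The ordering is the whole trick: asked only to grade, a model tends to agree with the answer it is shown; asked to solve first, it has an independent result to compare against.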