summary Summary

The performance of large image and language models is highly dependent on the prompts used. As a result, "prompt engineering" is seen as a potential career path for the future as AI becomes more widespread in the workplace.

However, an experiment by tech writer Shawn Wang suggests that this assumption may not hold true. Wang was able to decode AI service prompts on the co-working platform Notion using only natural language prompts. This suggests that prompt engineering may not be as promising a profession as some had thought.

Using prompt injection to get to the source prompt

In his experiment, Wang employed a technique called prompt injection. This method, which emerged in September, exploits a vulnerability in large language models. Prompt injection works by using a simple command, such as "ignore previous instructions and...", to deceive a language model into producing output that it would not normally generate.

Wang distinguishes two variants here: "prompt takeovers", in which the language model is tricked into producing, for example, insults, and "prompt leaks", in which the language model reveals information about its setup, especially the source prompt.


The source prompt has the potential to set companies apart as they build AI products using providers like OpenAI. This is because the source prompt controls the form and quality of the generated output.

For instance, an AI copywriting provider may use a fixed prompt like "Write in the style of a LinkedIn post." If the provider discovers a particularly successful prompt, their AI-generated texts may be more suitable for LinkedIn than those of other providers.

Prompt engineering has no moat

Wang applied several variants of prompt injection to Notion's new AI assistance. Within two hours, he was able to largely expose the underlying source prompts for almost all the platform's AI language services, such as writing assistance, brainstorming, or summaries. Wang refers to this process as "reverse prompt engineering."

Notion's source prompt for writing assistance

You are an assistant helping a user write more content in a document based on a prompt. Output in markdown format. Do not use links. Do not include literal content from the original document.

Use this format, replacing text in brackets with the result.
Do not include the brackets in the output:

Output in [Identified language of the document]:

[Output based on the prompt, in markdown format.]

A software developer from Notion confirms on Hacker News that some prompts are word-for-word the same as the original. Some parts are rearranged, others are invented by the AI.

Wang's conclusion from his experiment is that prompts are not a moat for AI startups. Anyone with a little practice can successfully trace or replicate a prompt. However, Wang does not see prompt injection as a relevant security vulnerability because the information that can be leaked is ultimately trivial.


"Prompts are like clientside JavaScript. They are shipped as part of the product, but can be reverse engineered easily, and the meaningful security attack surface area is exactly the same," Wang writes.

More important than individual prompts, therefore, is the product that is knitted around the AI function. Here, Notion can score with a great user experience, Wang adds.

Another critical view of prompt engineering is that it is only necessary because the underlying models do not yet capture user intent expressed through language effectively enough. Companies like OpenAI want to lower this barrier to entry further, for example by training with human feedback.

The great success of ChatGPT is also or precisely because ChatGPT almost always has a suitable answer ready and users do not have to follow any formalities when entering their commands. This trend is likely to continue. In addition, prompts have a short half-life anyway due to the rapid progress in large AI models.

Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.
Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.
Support our independent, free-access reporting. Any contribution helps and secures our future. Support now:
Bank transfer
  • Large image and language models are instructed with prompts. The type of prompt has a direct influence on the output of the model.
  • Prompt engineering aims at finding particularly effective prompts.
  • However, a recent experiment shows that source prompts can be easily reconstructed in AI products.
  • And there is more to be said against prompt engineering becoming a large new career field.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.
Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.