
Researchers show two words can reduce AI hallucinations

Researchers from Johns Hopkins University have found a simple technique that reduces hallucinations in large language models (LLMs) and improves the accuracy of their answers. By adding the phrase "according to" to queries, LLMs become more likely to quote text they have observed during training and to provide factual information rather than fabricating answers.
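In practice, the technique is just prompt engineering: an attribution phrase is prepended to the query before it is sent to the model. The sketch below illustrates the idea; the phrase list and helper function are hypothetical examples, not code from the study.

```python
# Minimal sketch of grounding prompts: prefix the query with an
# attribution phrase to nudge the model toward quoting source text.
# Phrases and the helper name are illustrative assumptions.

GROUNDING_PHRASES = [
    "According to Wikipedia, ",
    "According to the provided document, ",
]

def add_grounding(query: str, phrase: str = GROUNDING_PHRASES[0]) -> str:
    """Prepend a grounding phrase to a query before sending it to an LLM."""
    return phrase + query

prompt = add_grounding("where does the initial digestion of starch take place?")
print(prompt)
# The grounded prompt is then sent to the LLM in place of the raw query.
```

The only change versus a plain query is the leading phrase, which is what makes the technique cheap to apply across different models.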

An evaluation of LLM responses using the QUIP-Score metric, which measures how much of a model's output is quoted from a source corpus, shows a 5 to 15 percent increase in the accuracy of cited information when grounding prompts such as "According to Wikipedia..." are used. While the technique works across different LLMs, it is most effective with larger, instruction-tuned models.
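The intuition behind a quoting-precision metric can be sketched as the fraction of a generation's n-grams that also appear in a reference corpus. The function below is a rough, simplified stand-in for that idea, not the actual QUIP-Score implementation, which operates over corpus-scale n-gram lookups.

```python
def quip_like_precision(generation: str, corpus: str, n: int = 3) -> float:
    """Fraction of the generation's word n-grams that also occur in the
    corpus. A simplified illustration of quoting precision; the real
    QUIP-Score metric works at pre-training-corpus scale."""
    gen_tokens = generation.lower().split()
    corpus_tokens = corpus.lower().split()
    gen_ngrams = [tuple(gen_tokens[i:i + n])
                  for i in range(len(gen_tokens) - n + 1)]
    if not gen_ngrams:
        return 0.0  # generation too short to form any n-gram
    corpus_ngrams = {tuple(corpus_tokens[i:i + n])
                     for i in range(len(corpus_tokens) - n + 1)}
    hits = sum(1 for g in gen_ngrams if g in corpus_ngrams)
    return hits / len(gen_ngrams)

corpus = "the quick brown fox jumps over the lazy dog"
print(quip_like_precision("quick brown fox jumps over", corpus))  # fully quoted
print(quip_like_precision("a slow red cat sleeps", corpus))       # not quoted
```

A generation lifted verbatim from the corpus scores 1.0, while unrelated text scores 0.0, which is the behavior a grounding prompt is meant to push toward the high end.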
