
Maximilian Schreiner

Max is the managing editor of THE DECODER, bringing his background in philosophy to explore questions of consciousness and whether machines truly think or just pretend to.
Researchers show two words can reduce AI hallucinations

Researchers from Johns Hopkins University have found a simple technique that reduces hallucinations in large language models (LLMs) and improves the accuracy of their answers. When a query includes the phrase "according to", LLMs are more likely to quote observed text and provide factual information instead of fabricating answers.

Evaluating LLM responses with the QUIP-Score metric, the researchers measured a 5 to 15 percent increase in the accuracy of cited information when using grounding prompts such as "According to Wikipedia...". While the technique works across different LLMs, it is most effective with larger instruction-tuned models.
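The technique amounts to prepending a grounding phrase to the user's question before it is sent to a model. A minimal sketch in Python, where the function name and the default source are illustrative assumptions rather than anything prescribed by the paper:

```python
def ground_prompt(question: str, source: str = "Wikipedia") -> str:
    """Prepend an 'according to' grounding phrase so the model is
    nudged toward quoting observed text instead of fabricating.

    The function name and default source are hypothetical; any
    trusted corpus the model has seen can be named instead."""
    return f"According to {source}, {question[0].lower()}{question[1:]}"

# The grounded string would then be passed to any LLM API of your choice.
print(ground_prompt("What causes airplane turbulence?"))
# -> According to Wikipedia, what causes airplane turbulence?
```

Because the change is purely textual, it can be layered onto existing prompts without modifying the model or its decoding settings.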

ChatGPT update brings useful feature known from Microsoft's Bing Chat

ChatGPT now provides contextual questions and answers in each chat, as well as suggestions for starting a new chat on various topics, such as "Explain airplane turbulence". The questions and answers are displayed above the chat box and relate to the content already generated. This is similar to the suggestions that Microsoft displays in the Bing chatbot.

OpenAI has not yet released official information about the new feature.

ChatGPT generates contextual questions. | Image: THE DECODER