
A new study from Columbia University's Tow Center for Digital Journalism shows that ChatGPT's search function has serious accuracy problems, including fabricating sources and misquoting content, even from publishers that have agreements with OpenAI.


The researchers tested 200 news citations drawn from 20 publishers and found that ChatGPT provided wrong or partially incorrect source information in 153 cases. This included both OpenAI partner publishers and publishers that block ChatGPT from accessing their content.

The study found that ChatGPT only admitted to not finding a source in seven cases. "Eager to please, the chatbot would sooner conjure a response out of thin air than admit it could not access an answer," the researchers noted. What's more concerning is that ChatGPT presents these false sources with complete confidence, without indicating any uncertainty.

The issues affect even publishers with direct OpenAI partnerships and approved content access. The New York Post and The Atlantic, for example, often saw their content misquoted or misrepresented by the AI system.


ChatGPT cites copied content

In some instances, ChatGPT pointed to copied content instead of original sources. For one New York Times article, the system linked to a website that had copied the entire piece without attribution.

"As a publisher, that’s not something you want to see," said Mat Honan, editor-in-chief of MIT Technology Review. "But there’s so little recourse."

When asked about these findings, OpenAI responded by highlighting that ChatGPT directs 250 million users to high-quality content weekly. The company said it's working with partners to improve citation accuracy.

The researchers concluded that publishers currently have no way to ensure ChatGPT Search displays their content correctly, whether they work with OpenAI or not.

The complete study results are available on GitHub.

Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.
Summary
  • A study by the Tow Center for Digital Journalism reveals serious flaws in ChatGPT's search function. In 153 out of 200 news citations tested, the chatbot provided incorrect or partially incorrect source information, even for content from OpenAI partner publishers.
  • Rather than admit that it does not have access to a piece of information, ChatGPT prefers to make up an answer, and even presents false sources with a high degree of conviction. In some cases, the chatbot even referred to plagiarized content instead of the original sources.
  • According to the researchers, publishers currently have no way of ensuring that their content is presented correctly in ChatGPT's search, whether they work with OpenAI or not. OpenAI was evasive about the findings of the study.
Max is managing editor at THE DECODER. As a trained philosopher, he deals with consciousness, AI, and the question of whether machines can really think or just pretend to.