
ChatGPT is generating false information that's influencing political discussions, reports Elizabeth Lopatto at The Verge. Several journalists and commentators have fallen for incorrect information provided by the AI chatbot.


Ana Navarro-Cardenas claimed on X that President Woodrow Wilson had pardoned his brother-in-law Hunter deButts. According to Lopatto, there is no trace of a Wilson brother-in-law named deButts: "I can't prove Wilson didn't pardon a Hunter deButts; I can only tell you that if he did, that person was not his brother-in-law."

Navarro-Cardenas is known as a commentator on CNN and The View. Esquire and other media outlets spread similar misinformation, claiming that George H.W. Bush pardoned his son Neil and Jimmy Carter pardoned his brother Billy. While the source of those claims remains unclear, Navarro-Cardenas cited ChatGPT as hers.


When asked about presidential pardons, the chatbot consistently provides false information, according to Lopatto. Google Gemini and Perplexity also return incorrect answers at times, generating unverified claims of their own.

Blind trust in AI tools raises concerns

Lopatto points out the dangers of people trusting AI systems blindly instead of checking primary sources. Most users don't question the responses they receive, she suspects.

This, she writes, is a fundamental design flaw in these systems: they fail to take human behavior into account. In its current form, she concludes, generative AI is unsuitable as a research tool. Noting the "large-scale degradation of our information environment," Lopatto asks: "What good is an answer machine that nobody can trust?"

Study confirms ChatGPT's search limitations

A new study from Columbia University's Tow Center for Digital Journalism supports these concerns. Researchers analyzed ChatGPT's search function and found serious flaws. Out of 200 tested news citations, 153 contained incorrect or partially incorrect source attributions. Even content from OpenAI's publishing partners was often misquoted or misrepresented. In some cases, ChatGPT linked to plagiarized articles instead of original sources.

The study concludes that publishers currently have no way to ensure their content is accurately represented in ChatGPT, regardless of whether they partner with OpenAI. The researchers stress the urgent need to protect the integrity of the information ecosystem.


Correction: The old version of the article said "a person who, according to Lopatto, never existed". This has been changed to more accurately reflect what Lopatto wrote in her article.

Summary
  • ChatGPT is providing false information about presidential pardons that's spreading through media channels, as CNN commentator Ana Navarro-Cardenas and several journalists cited non-existent pardons involving Woodrow Wilson, George H.W. Bush, and Jimmy Carter.
  • A test by The Verge showed that ChatGPT consistently generates incorrect information about presidential pardons, with similar issues found in Google Gemini and Perplexity AI responses, highlighting widespread reliability problems in AI systems.
  • Research from Columbia University's Tow Center found that ChatGPT's search function produced incorrect or partially incorrect source attributions in 153 out of 200 tested news citations, including misquoted content from OpenAI's publishing partners.
Max is managing editor at THE DECODER. As a trained philosopher, he deals with consciousness, AI, and the question of whether machines can really think or just pretend to.