
Media personalities stumble as ChatGPT invents presidential pardon history

Image: Ideogram prompted by THE DECODER

ChatGPT is generating false information that's influencing political discussions, reports Elizabeth Lopatto at The Verge. Several journalists and commentators have fallen for incorrect information provided by the AI chatbot.

Ana Navarro-Cardenas claimed on X that President Woodrow Wilson had pardoned his brother-in-law Hunter deButts. According to Lopatto, there is no trace of a Wilson brother-in-law named deButts: "I can't prove Wilson didn't pardon a Hunter deButts; I can only tell you that if he did, that person was not his brother-in-law."

Navarro-Cardenas is known as a commentator on CNN and The View. Esquire and other media outlets spread similar misinformation, claiming that George H.W. Bush pardoned his son Neil and that Jimmy Carter pardoned his brother Billy. The source of those claims remains unclear, but Navarro-Cardenas cited ChatGPT as hers.


When asked about presidential pardons, ChatGPT consistently provides false information, according to Lopatto. Google Gemini and Perplexity also return incorrect answers at times, generating facts that cannot be verified.

Blind trust in AI tools raises concerns

Lopatto points out the dangers of people trusting AI systems blindly instead of checking primary sources. Most users don't question the responses they receive, she suspects.


This is a fundamental design flaw, she writes: these systems fail to take human behavior into account. In its current form, she concludes, generative AI is unsuitable as a research tool. Noting the "large-scale degradation of our information environment," Lopatto asks: "What good is an answer machine that nobody can trust?"

Study confirms ChatGPT's search limitations

A new study from Columbia University's Tow Center for Digital Journalism supports these concerns. Researchers analyzed ChatGPT's search function and found serious flaws. Out of 200 tested news citations, 153 contained incorrect or partially incorrect source attributions. Even content from OpenAI's publishing partners was often misquoted or misrepresented. In some cases, ChatGPT linked to plagiarized articles instead of original sources.

The study concludes that publishers currently have no way to ensure their content is accurately represented in ChatGPT, regardless of whether they partner with OpenAI. The researchers stress the urgent need to protect the integrity of the information ecosystem.

Correction: An earlier version of this article said "a person who, according to Lopatto, never existed". This has been changed to more accurately reflect what Lopatto wrote in her article.
