ChatGPT is generating false information that's influencing political discussions, reports Elizabeth Lopatto at The Verge. Several journalists and commentators have fallen for incorrect information provided by the AI chatbot.
Ana Navarro-Cardenas claimed on X that President Woodrow Wilson had pardoned his brother-in-law Hunter deButts. According to Lopatto, there is no trace of a Wilson brother-in-law named deButts: "I can't prove Wilson didn't pardon a Hunter deButts; I can only tell you that if he did, that person was not his brother-in-law."
Navarro-Cardenas is known as a commentator on CNN and The View. Esquire and other media outlets spread similar misinformation, claiming that George H.W. Bush pardoned his son Neil and that Jimmy Carter pardoned his brother Billy. While the source of those claims remains unclear, Navarro-Cardenas cited ChatGPT as hers.
Hey Twitter sleuths, thanks for taking the time to provide context. Take it up with Chat GPT… https://t.co/4OfMtb09xL pic.twitter.com/TiM2CNkPDw
- Ana Navarro-Cárdenas (@ananavarro) December 3, 2024
When asked about presidential pardons, the chatbot consistently returns false information, according to Lopatto. Google Gemini and Perplexity also give incorrect answers at times, sometimes fabricating facts outright.
Blind trust in AI tools raises concerns
Lopatto points out the danger of people trusting AI systems blindly instead of checking primary sources. Most users, she suspects, don't question the responses they receive.
This, she writes, is a fundamental design flaw in these systems: they fail to account for how people actually behave. In its current form, she concludes, generative AI is unsuitable as a research tool. Noting the "large-scale degradation of our information environment," Lopatto asks: "What good is an answer machine that nobody can trust?"
Study confirms ChatGPT's search limitations
A new study from Columbia University's Tow Center for Digital Journalism supports these concerns. Researchers analyzed ChatGPT's search function and found serious flaws. Out of 200 tested news citations, 153 contained incorrect or partially incorrect source attributions. Even content from OpenAI's publishing partners was often misquoted or misrepresented. In some cases, ChatGPT linked to plagiarized articles instead of original sources.
The study concludes that publishers currently have no way to ensure their content is accurately represented in ChatGPT, regardless of whether they partner with OpenAI. The researchers stress the urgent need to protect the integrity of the information ecosystem.
Correction: An earlier version of this article said "a person who, according to Lopatto, never existed." This has been changed to more accurately reflect what Lopatto wrote in her article.