
OpenAI redirected two million ChatGPT requests about US election results to verified news sources rather than letting its AI system answer directly. The decision raises broader questions about AI systems' ability to provide reliable information across all topics.


The company said it took strict security measures during the 2024 U.S. election, directing users to established outlets such as the Associated Press and Reuters instead of providing direct answers.

OpenAI reported about two million ChatGPT queries related to election results on election day and the following day. In the month leading up to the election, the system also redirected approximately one million queries about the voting process to the official CanIVote.org website.

OpenAI emphasized that ChatGPT was programmed not to express political preferences or voting recommendations, even when directly asked.


However, the company left politically oriented CustomGPTs active for weeks before the election, including some that wrote X posts mimicking Donald Trump's style. OpenAI has not responded to repeated questions about how it moderates these political CustomGPTs.

While three million ChatGPT election queries and four million views on Perplexity's election page may seem significant, major news outlets like CNN see much higher traffic, not to mention Google's billions of daily searches.

But the election metric here is just a snapshot; what matters is how fast these AI services grow in the news space. And OpenAI needs ChatGPT Search to grow: the company has invested hundreds of millions in deals with select news publishers, a big bet that sets up a classic prisoner's dilemma for the news industry.

Blocking political AI image slop

In addition to redirecting election queries, OpenAI says it blocked more than 250,000 attempts to generate DALL-E images of politicians like President Biden, Donald Trump, and Vice President Harris in the month before the election.

Blocking DALL-E from generating political imagery signals a commitment to preventing abuse, but the impact is limited: AI image generators with fewer restrictions are widely available, and President-elect Trump himself used fake AI images in his campaign.

Recommendation

Spreading misinformation is not just an election issue

OpenAI's decision to redirect election queries shows that it is aware of AI's potential to spread misinformation. But the company's transparency about accuracy stops there, raising questions about ChatGPT's reliability across topics.

When asked repeatedly about error rates in general search results, OpenAI provided only vague responses, saying it "continuously looks to improve" its handling of incorrect information across all its models.

The problem of AI systems providing incorrect information goes well beyond election coverage. These errors (also known as "hallucinations" or just "soft bullshit") happen across all topics, yet OpenAI isn't willing to share if and how it measures accuracy rates in regular searches. The company is also not transparent about how it compiles and paraphrases information from sources, or how it prioritizes them.

Industry-wide silence on error rates

Other major AI search providers, including Perplexity and Google, have also remained silent when asked about their systems' accuracy rates. This widespread reluctance to answer such a basic question suggests that AI search companies would rather avoid measuring error than openly address it. The silence is particularly striking given the growing role of these systems in providing information to millions of users.


And it leaves a fundamental question unanswered: How often do these AI systems provide incorrect information across all types of queries? Without transparency about error rates, users can't make informed decisions about when to trust AI-generated answers.

Summary
  • OpenAI redirected two million ChatGPT queries about the U.S. election results to verified news sources such as the Associated Press and Reuters instead of providing direct answers, since ChatGPT could have spread misinformation.
  • In the month leading up to the election, OpenAI also redirected approximately one million ChatGPT queries about the voting process to the official CanIVote.org website and blocked more than 250,000 attempts to generate DALL-E images of politicians such as President Biden, Donald Trump, and Vice President Harris.
  • While OpenAI's caution with election information shows an awareness of potential misinformation risks, the company has been less transparent about ChatGPT's accuracy in general searches. But spreading misinformation is not just an election issue.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.