Content
summary Summary

Google is restricting the use of its AI products for the 2024 U.S. election, which should raise questions about the reliability of AI-generated information in other areas.

Ad

Google announced limitations on AI summaries in search, YouTube live chats, image search, and Gemini apps for election-related content. The company says users need reliable, up-to-date information for federal and state elections, acknowledging that "this new technology can make mistakes as it learns or as news breaks."

Other AI providers, such as Microsoft, ChatGPT, and even Grok, also refuse to provide AI-generated answers to election questions, pointing instead to official sources. The reason seems obvious: The companies know that spreading false election information is a crime.

Still, Google's precautions raise questions about why similar restrictions don't apply to other topics. For example, Google's AI answered medical questions when AI Overviews launched. Following Google's election reasoning, they seem to be knowingly risking misinformation in other areas.

Ad
Ad

It's unclear whether Google will take responsibility for misinformation as a publisher would, or whether Google will be considered a publisher in the future. It's also unclear for which topics potential misinformation is actually okay.

Of course, this isn't just a question for Google, but for all providers of "AI answer machines" that aim to replace traditional search, such as OpenAI's SearchGPT.

Context is crucial

Misinformation isn't the only problem. When AI replies in a chat window, the context of the original source is lost. This can lead to manipulated information, even if the AI is quoting correctly because the intent of a text often needs to be understood in its original context.

For example, I recently asked Perplexity for an answer about a particular degree program. The answer included a very positive quote from a well-known media brand. But that quote actually came from a paid advertisement on the media brand's site. It was tagged as an advertisement there, but that was lost when the AI scraped the text and rephrased it for its answer.

While this problem may be solvable for a few well-known media brands, it's nearly impossible to solve for the entire Internet of millions of publishers and blogs.

Recommendation

In addition, Perplexity CEO Aravind Srinivas recently described how the language models behind AI search engines are vulnerable to simple attacks such as hidden instructions in website text.

Perplexity, unlike its competitors, isn't restricting AI-generated content for US elections. Instead, the company claims to prioritize citing reliable sources, encouraging users to verify answers using provided references.

Who checks sources anyway?

This strategy seems straightforward, but it hinges on a crucial assumption: that users will actually click through to check sources. But such diligent fact-checking behavior is unlikely.

If users thoroughly verified every AI response, it would negate the time-saving benefits of using an AI assistant. This verification process could easily take longer than directly browsing a few trusted news sites.

Ad
Ad
Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.

Perplexity could easily validate its approach by publishing click-through rates for cited sources. This data, which the company likely has, would provide concrete evidence of users' verification habits. However, Perplexity has not shared this information.

All of this highlights a fundamental issue in AI-based information retrieval: balancing convenience, accuracy, and responsible use. As AI systems become more pervasive, the challenge of encouraging critical thinking and source verification without sacrificing usability will become increasingly important.

A June 2024 study by the Reuters Institute shows that chatbots can do both, providing accurate information and spreading misinformation about election issues.

The Institute notes, "Users may not always notice these errors (and we often only did when we checked the answers claim by claim), given the authoritative tone of these systems and how they provide a single answer instead of a list of results. […] While all systems provide small disclaimers about potential inaccuracies, we can't say how much attention people pay to these and if so, how this affects their perception."

The Institute also points out that chatbots are currently used by few people to get news and are typically just one of many sources of information, limiting their potential for harm.

Meanwhile, Meta AI and OpenAI just announced rapidly growing user numbers, with OpenAI reaching 200 million weekly active users for ChatGPT and Meta reaching 400 million monthly active users for Meta AI.

Ad
Ad
Support our independent, free-access reporting. Any contribution helps and secures our future. Support now:
Bank transfer
Summary
  • Google is restricting the use of its AI products for the 2024 U.S. election, acknowledging that AI can make mistakes and spread misinformation. The company is limiting AI summaries in search, YouTube live chats, image search, and Gemini apps for election-related content.
  • Other AI providers, such as Microsoft, ChatGPT, and Grok, are also refusing to provide AI-generated answers to election questions, instead pointing users to official sources.
  • Google's precautions, however, raise the question of why similar restrictions don't apply to other topics, such as medical information.
Sources
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.
Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.