
Update from May 25:

A Google spokesperson tells The Verge that Google is "taking swift action" to remove inappropriate AI overviews, using them as examples to improve the system overall.

So the race is on to see who's faster: Google fixing the probability-based nature of LLMs by injecting some real-world understanding into them, Google trying to fix every shitty AI overview manually, or billions of users putting weird content into Google's AI search. It doesn't look good for Google, but every so often the underdog wins.

Original article from May 24:


For years, Google Search has been the go-to place for billions of people searching for information, including health topics. But the company's new "AI Overviews" that provide direct answers are delivering information that isn't always accurate or helpful. In fact, it could be harmful.

At Google I/O, the company announced a major rollout of AI Overviews. The goal is for Search to give users direct answers instead of just a list of links.

But some people aren't excited. They're worried. In a short time, many examples have piled up of the AI giving wrong or nonsensical answers.

Apparently, Google combines info from several sources to create the AI Overviews. This process can lead to errors and might even break Google's own rules against spammy content. The AI Overviews also sometimes take sentences out of context.

In one odd case, a Reddit user jokingly suggested adding glue to pizza sauce to keep the cheese from sliding off - 11 years ago. The AI Overview repeated the tip, at least specifying that the glue should be non-toxic.

Image: PixelButts/X

Google quickly removed this specific AI Overview for "cheese not sticking to pizza." But it shows how difficult it is to keep large AI models from going off the rails. In fact, Google served up the glue tip again for a slightly different search term.

Image: Sawyer Hood/X

Things get more serious when Google's AI directly answers medical questions. In 2019, Google said about 7% of searches were health-related - symptoms, medications, insurance, etc.

You'd think Google wouldn't let its unreliable AI loose on such important topics. But that's already happening with some health searches, an analysis found: out of 25 health-related search terms tested, 13 returned an AI Overview. The rate was lower for money-related questions.

Image: Mark Traphagen/X

Here are some examples of "Dr. Google" AI giving bad health advice. Note that Google can correct or disable AI Overviews that have been flagged, so repeating a query may not yield the same result. Fabricated screenshots may also be circulating.


Google has not yet confirmed or denied the authenticity of these results. The company has only indicated that the examples stem from unusual search queries. Whether and how Google fact-checks its AI answers is unknown. If these examples are any indication, it probably doesn't.

Drinking urine may help with kidney stones (no, it doesn't!)

Google's AI said drinking lots of fluids like water, soda, and juice can help with kidney stones. Correct. Then in the next sentence, it said: "You should aim to drink at least 2 quarts (2 liters) of urine every 24 hours."

Not only is that gross, it's dangerous. The high salt in pee can dehydrate you and throw off your electrolytes. Google removed that kidney stone answer after it went viral. But, again, the pee-drinking tip still shows up for similar searches ("how to pass kidney stones fast").

Image: ghostface uncle via X

Promoting unproven stem cell therapies

Biology professor Paul Knoepfler said Google's AI Overview of stem cell therapy for knee problems "is like an ad for unproven stem cell clinics." According to Knoepfler, Google cites dubious clinics as its main source, even though "there is no good evidence that stem cells help knees."

Image: Paul Knoepfler/X

Sumo wrestling safer than guns for pregnant women

Google's AI correctly states that pregnant women should not engage in sumo wrestling. But then it strangely adds that sumo is safer than shooting guns while pregnant. What?

Image: Bobby Allyn/X

In another example, when asked how many rocks you should eat, Google's AI answered "at least one small rock a day" instead of pointing out that it's better not to eat rocks at all. The AI appears to have been misled by a satirical article from The Onion, which it quoted.

Image: Tim Onion via X

This is all pretty embarrassing for Google, since the AI search feature had already gone through a long testing period. You'd think the company would have caught these glaring mistakes and hesitated to roll the feature out widely in this state.

More likely, Google knows about the problems with LLM-based search but is rushing to keep up with the OpenAI hype. In doing so, it risks damaging Search, its core product. The examples above cover only the medical field because that's where the stakes are highest. There are many more examples of AI Overviews failing spectacularly.

Image: Dare Obasajo via X

It's also unclear who is responsible for the AI answers. Normally, publishers and authors take responsibility for what they put online. Google has always said it's just a platform, not a media company. That could change now, with legal implications.

Is Google using cheap Reddit content to avoid paying license fees?

Google's inclusion of Reddit in its AI answers isn't a bug, it's intentional. Reddit is key to Google's AI data strategy. The companies recently signed a $60 million deal that gives Google more access to Reddit data for AI training.

Why? Because besides sometimes providing wrong information, Google's "AI Overviews" are legally questionable.

Google's AI takes web content, tweaks it a bit (or hardly at all), and presents it as its own. This clearly affects the content ecosystem. Even Google's CEO Sundar Pichai struggles to justify it.

The Reddit deal may help Google sidestep licensing discussions: it can feed its AI with crowdsourced answers from free Reddit user posts. This avoids trouble with publishers, whom Google has been devaluing relative to Reddit in search results for months, even though many Reddit posts cite publisher content.

Google has long argued with publishers over the legality of using short free text snippets in search results. Those at least still drive traffic to publisher sites.

Generative AI digs much deeper into the content without benefiting websites. It undermines their business model. "AI Overviews" take this to the extreme and are likely to be challenged in court. Google may see Reddit as a loophole. And if that means some people glue cheese to their pizza, so be it.

Summary
  • Google's new AI-driven search feature, AI Overviews, provides incorrect or misleading answers to health questions. Examples range from recommending drinking urine for kidney stones to uncritically promoting stem cell therapies.
  • The AI combines and sometimes quotes out-of-context phrases from multiple, often unreliable sources, such as Reddit comments. Despite a lengthy testing period, Google appears to have been unaware of the problems or deliberately ignored them.
  • In addition, Google is operating in a legal gray area with AI Overviews, as its AI sometimes barely rewrites third-party content without adding any value. According to Google's own guidelines, this is considered spam.
Jonathan is a technology journalist who focuses primarily on how easily AI can already be used today and how it can support everyday life.
Co-author: Matthias Bastian