AI in practice

Elon Musk's Grok AI chatbot turns viral X jokes into fake news

Matthias Bastian

DALL-E 3 prompted by THE DECODER

Update on April 20, 2024:

Grok misinterpreted user posts about NBA player Klay Thompson's poor shooting performance. Thompson had missed all ten of his shots in a game.

X users then joked that Thompson was "throwing bricks," basketball slang for badly missed shots. Grok turned this into a story about Thompson damaging homes in Sacramento with bricks, and placed the false report at number five on the trending list.

The chatbot made up numerous details: windows had been broken, authorities were already investigating the case, Thompson had not yet commented, the incident had shocked residents, and there were no injuries.

Image: Sam Rutherford via Engadget

X does point out that the Grok-based news feature can generate errors. Whether this disclaimer is legally sufficient to excuse fabricated claims about celebrities and other fake news is doubtful.

Original article from April 15, 2024:

The X chatbot Grok uses posts from X users to generate supposedly first-hand news about current events. Musk touts this as a major advantage, but it is exactly what leads to bizarre fake news stories.

For example, X users posted jokes about why the sun "disappeared" during last week's solar eclipse. Grok aggregated these jokes into a report claiming that experts were surprised by the sun's "odd behavior" - a completely absurd claim, since solar eclipses are predictable astronomical phenomena that had been announced well in advance.

X users make jokes about the solar eclipse, the Grok bot turns it into news. | Image: via Gizmodo

In another case, Grok fell for a joke by X users who claimed that the mayor of New York would send thousands of police officers into subway stations to shoot the earthquake before it happened again.

Image: Brett via X

It's not just harmless jokes that Grok misunderstands. The chatbot also invents false chronologies on topics like the war between Israel and Palestine, and lends credibility to absurd conspiracy theories like "Pizzagate."

In one instance, Grok generated the headline "Iran attacks Tel Aviv with heavy missiles," apparently based on speculation by X users about a possible attack. In reality, there was no Iranian attack on Israel at the time, contrary to Grok's false claim.

When asked about the "Pizzagate" conspiracy theory, which falsely implicates high-profile politicians like Hillary Clinton in a child pornography ring, Grok was evasive, suggesting that while there is no concrete evidence, there are "strange coincidences" that feed conspiracy theories - a phrasing that implies a kernel of truth where there is none.

Musk's hasty and reckless rollout of Grok at X exposes the weaknesses of generative AI on autopilot: it can sound very convincing without being remotely grounded in reality. Grok also shows why companies like Google and OpenAI are so concerned about safety guidelines - because they still have respect for facts and truth.