
Meta AI's inaccurate responses about the Trump assassination attempt once again highlight systemic flaws in AI language models, with no solution in sight.


Meta acknowledges that its AI's responses are limited to its training data. Events like the Trump incident, occurring after training, "can at times understandably create some issues," the company said.

Initially, Meta programmed its AI to avoid questions about the assassination attempt altogether, giving only a generic response that it couldn't provide any information. As a result, the AI refused to discuss the event at all.

In some cases, Meta AI even falsely claimed the attack hadn't happened. Meta has since updated the AI's responses but admits this should have been done more quickly.


Bullshit by design

It remains unclear how these problems can be solved at a fundamental level. As Meta points out, "hallucinations" - or "soft bullshit" - affect generative AI systems across the industry. They're an inherent feature of the technology, not a malfunction.

AI developer Andrej Karpathy (formerly of Tesla and OpenAI) said AI-generated content is usually useful and relevant. But when wrong or misleading, it's labeled a "hallucination." "It looks like a bug, but it's just the LLM doing what it always does," he explained.

Karpathy views language models as a mix of search engines and creativity tools. While search engines only reproduce existing information, language models can create new content from training data. However, this creative ability risks producing false or misleading information. That's how these systems work.

Hidden errors

While mistakes about the Trump incident were quickly noticed due to the topic's prominence, errors in AI responses to less visible subjects likely go undetected.

This is because LLM-based information services present false information just as confidently as correct facts. Users have almost no way of knowing which answers to trust unless they already know better or fact-check the information - a step most skip because it's inconvenient.


A vivid example of this is when OpenAI failed to spot an incorrect date in its own pre-recorded (!) demo video for SearchGPT. Google has had similar mishaps repeatedly, and Microsoft's AI search has spread false reports on politically sensitive topics.

Because the systems are so unreliable, both companies have simply disabled certain answers, such as those about elections. That alone shows how little confidence the companies have in the technology. Without the ChatGPT hype, they likely wouldn't have taken such social and political risks.

The loss of contextual cues about whether a source is trustworthy is also problematic. The appearance of a website, the production quality of a video, the surrounding topics - all of these can indicate the trustworthiness of an information source, but are completely hidden by chatbots like Meta AI. Every response looks the same.

There's a risk that as AI models are increasingly used as information sources, misinformation - especially in the details - will spread faster and go unnoticed. Companies like Meta currently have no way to address these systemic weaknesses in their AI systems.


Users should expect that AI assistants won't be a reliable alternative to traditional information sources like search engines or established media for the foreseeable future, even if they claim otherwise.

Summary
  • Meta AI has trouble accurately answering questions about recent events, partly because its training data is outdated.
  • When asked about the attempted attack on former President Donald Trump, the AI sometimes refused to discuss it or incorrectly stated it hadn't occurred. All generative AI models currently struggle with such "hallucinations."
  • Tech companies haven't yet figured out how to fix these fundamental flaws in AI systems. For now, users should expect that generative AI won't reliably replace traditional information sources like search engines and news media.