Researchers argue that the falsehoods generated by ChatGPT and other large language models are better described as "bullshit" rather than hallucinations. These AI systems are indifferent to the truth when generating text.

When large language models (LLMs) like OpenAI's ChatGPT make false claims, they are not "hallucinating" or lying - they are spreading bullshit. This is the conclusion reached by researchers from the University of Glasgow in a recent paper published in the journal Ethics and Information Technology.

The researchers base their analysis on philosopher Harry Frankfurt's definition of the term "bullshit". According to Frankfurt, bullshit is characterized not by an intent to deceive, but by a reckless disregard for the truth.

Someone who spreads bullshit is not necessarily trying to deceive others, but is simply indifferent to whether what they are saying is true or false. The researchers argue that this is precisely the case with LLMs.

Lies, bullshit, and hallucinations

Conventional wisdom holds that someone is lying when they make a claim they believe to be false, intending to make others believe it is true.

Bullshit, on the other hand, does not require the speaker to believe what he or she is saying. The "bullshitter" doesn't care about the truth and may not even be trying to deceive us, either about the facts or about their own beliefs.

The researchers distinguish between two kinds of bullshit: "hard bullshit," in which the speaker tries to deceive the listener about his or her intentions, and "soft bullshit," in which the speaker has no such intention but simply doesn't care about the truth.

Hallucinations, in contrast, are perceptions of something that does not exist. For LLMs, this metaphor is inappropriate: the systems do not perceive anything, nor do they attempt to communicate beliefs or perceptions.

ChatGPT spreads "soft bullshit"

The researchers argue that ChatGPT and other LLMs clearly spread "soft bullshit". The systems are not designed to produce truthful statements, but rather to produce text that is indistinguishable from human writing. They prioritize persuasiveness over correctness.

Whether ChatGPT also produces "hard bullshit" is a more difficult question, depending on whether one can attribute intentions to the system. This hinges on complex issues such as whether ChatGPT has beliefs or internal representations intended to represent truth.

However, for ChatGPT to be classified as a hard bullshitter, it may be enough that the system is designed to pretend to tell the truth, thereby deceiving the audience about its goals and intentions. And one could argue that OpenAI's inclusion of the phrase "ChatGPT may make mistakes. Consider checking important information" at the end of each chat is not enough to dispel this claim.

The danger of calling bullshit "hallucinations"

Describing AI bullshit as "hallucinations" is not harmless, the authors warn. It contributes to overestimating the capabilities of these systems and suggests that their accuracy problems have solutions, which may not be the case.

Calling the systems "bullshitters" is more accurate and represents good science and tech communication in a field that "sorely needs it," the researchers conclude.

"The machines are not trying to communicate something they believe or perceive. Their inaccuracy is not due to misperception or hallucination. As we have pointed out, they are not trying to convey information at all. They are bullshitting."

After reading the study, ChatGPT itself agrees: "Framing the discussion around the outputs of language models as 'bullshit' in the philosophical sense can help clarify their capabilities and limitations, fostering a more accurate understanding of how these models function and what to expect from them."

Summary
  • Researchers at the University of Glasgow argue that the falsehoods of ChatGPT and other large language models are better described as "bullshit" rather than "hallucinations." According to philosopher Harry Frankfurt, bullshit is characterized by an indifferent attitude toward truth.
  • The researchers distinguish between "hard bullshit", where the speaker tries to deceive about his or her intentions, and "soft bullshit", where the speaker is simply indifferent to the truth. ChatGPT clearly spreads "soft bullshit" because it is designed to produce convincing text, regardless of whether it is true.
  • The term "hallucinations" for false AI statements is problematic because it overestimates the systems' capabilities and suggests that the accuracy problems can simply be fixed. The researchers advocate the term "bullshit" as a more accurate description that promotes better scientific communication in the field.