
In a recent interview with The Verge, Microsoft co-founder Bill Gates made comments about AI's potential to spread false information that miss the core of the problem.


Gates said: "People can type misinformation into a word processor. They don’t need AI, you know, to type out crazy things. And so I’m not sure that, other than creating deepfakes, AI really changes the balance there."

While Gates is technically correct, he fails to address the main concern. The primary risk of AI-generated text isn't just that false information can be created – it's the potential for massive scaling.

Two studies of AI's persuasive abilities have found that AI models can be more convincing than humans. In one study, GPT-4 with access to personal information about its counterpart increased the odds of agreement with opposing arguments by 81.7 percent compared to debates between humans.


OpenAI CEO Sam Altman has warned of the "superhuman persuasion" of AI models, "which may lead to some very strange outcomes." Security researchers, for example, fear far more convincing phishing emails.

AI makes disinformation more effective and scalable

With AI, disinformation can be produced much faster, at a much larger scale, at a high human level of quality – and possibly at a superhuman level in the future. That is clearly not the same as humans simply typing "misinformation into a word processor."

Gates' argument misses the point, much like those who downplay the risks of deepfakes by saying that images and videos have always been manipulable, citing Photoshop as an example.

Again, it's not about the technical ability to create fake content – it's about simplicity, accessibility, and scale. There's a big difference between having to buy and learn professional software to create a politically motivated fake image and being able to generate one with a single sentence on X.

Given that figures like Donald Trump and Elon Musk are fulfilling the deepfake scenarios that some researchers predicted years ago, it would be helpful if influential people like Gates made more nuanced statements about the dangers of AI-generated disinformation.

Recommendation

There may be arguments that the disinformation potential of AI is overstated. But Gates' argument isn't one of them. It's an inaccurate comparison that distracts from and trivializes the real problem.

Summary
  • In a recent interview, Microsoft founder Bill Gates downplayed the potential for AI to spread disinformation, arguing that people can already type false information into a word processor without the need for AI.
  • However, Gates' argument misses the main concern, which is not simply the ability to create disinformation, but the potential for AI to massively scale it. Studies have shown that AI models can be more persuasive than humans, increasing agreement with opposing arguments by up to 81.7 percent.
  • The real danger of AI-generated disinformation lies in its ability to create deceptive content faster, on a larger scale, and potentially more effectively than humans. Gates' comparison to typing disinformation into a word processor fails to capture the simplicity, accessibility, and scalability of AI-driven disinformation.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.