
Voters in New Hampshire received phone calls over the weekend featuring a presumably AI-generated voice that claimed to be US President Biden and urged them not to vote in the primary.


The voice reportedly told recipients that "your vote makes a difference in November, not this Tuesday." The call was spoofed to appear as though it came from a member of the Democratic committee.

The Attorney General's Office sees this as an "unlawful attempt to disrupt the New Hampshire Presidential Primary and suppress New Hampshire voters." The office has launched an investigation into the allegations.

"Although the voice in the robocall sounds like the voice of President Biden, this message appears to be artificially generated based on initial indications."


AI finds its way into political propaganda

What researchers warned about years ago seems to be coming true. There are now numerous documented cases of AI being used for political propaganda.

The risks of deepfakes in politics are many: fabricated videos and images, manipulative texts generated en masse, or, as in this case, convincing fake voices. Given the increasingly audiovisual, fast-paced, and superficial nature of communication on social media, this development poses a risk to society.

Politicians warned of an onslaught of deepfakes before the last US election. Perhaps the warning was simply one election too early: since then, deepfake technology has become cheaper, more accessible, and more capable.

California made it a crime in 2019 to use deepfakes that damage a politician's reputation during an election campaign. Social media platforms have been urged to take action and have introduced policies against deepfakes and manipulated media. Companies and researchers are also working on tools to reliably identify deepfakes.

But it remains an open question whether the battle against ever-improving deepfakes can be won by technical means. As detection technology improves, so does generation technology, and there may come a point where deepfakes are so close to the original that even the most sophisticated detectors cannot tell them apart.


As AI becomes more widespread, providers are finding it increasingly difficult to control how their technology is used. OpenAI, for example, which has outlined detailed measures against the misuse of its technology in the US election, recently blocked a chatbot for a Democratic candidate that an outside vendor had built on its API. At the same time, OpenAI's own GPT store is flooded with chatbots imitating Donald Trump or mimicking his political style.

Summary
  • Voters in New Hampshire received robocalls with a voice, presumably generated by AI, claiming to be US President Biden and urging them not to vote in the primary.
  • The fake recordings have been classified as an illegal attempt to disrupt the primary and suppress voters. The Attorney General's Office has launched an investigation.
  • Deepfakes pose a potential threat to society and politics, but it is unclear whether and how they can be prevented. Reliable technical safeguards do not yet exist.