AI and society

Meta CTO Andrew Bosworth thinks people will get used to deepfakes

Matthias Bastian

DALL-E 3 prompted by THE DECODER

Meta CTO Andrew Bosworth does not believe in watermarking systems for AI-generated media. At the same time, he believes that people will get used to deepfakes and AI-generated content.

In an interview with The Verge, Bosworth expresses skepticism about watermarking as a solution to the problem of identifying AI-generated content and fakes.

According to Bosworth, there are very few digital things that cannot be reproduced digitally. Watermarks meant to prove that content is either authentic or AI-generated could be subverted in either case, creating new risks.

Still, Meta wants to take responsibility for its role on the internet, Bosworth makes clear. The company's recently announced image generator uses watermarking technology, for example.

OpenAI CEO Sam Altman is also skeptical of watermarking as a way to authenticate content as either human-made or AI-generated. OpenAI recently took its AI text detector offline due to poor accuracy.

A brief period of authentic multimedia

According to Bosworth, society has just lived through a unique period in history, roughly the past 50 years, in which photos and videos were almost certainly real.

Before that, written and oral accounts of events were inherently suspect. The same will be true again in the future, except that it will also apply to visual information.

Bosworth believes that society has faced and overcome similar challenges in the past, and that people will get used to fake media and adapt to the new reality.

"From today forward, it seems very likely that the youth of our world will know that all accounts that they come across, whether it be textual, oral, or video or photographic are suspect," says Bosworth.

The last 50 years or so would then be the historical outlier, in which it was strangely cheaper to capture real images than to generate artificial ones.

Ian Goodfellow, the inventor of the GAN technology that powered the first deepfakes, made a nearly identical point in 2017. The last few decades have been "a little bit of a fluke, historically," in terms of the authenticity of audiovisual information, Goodfellow said. In the future, people will need to be more skeptical.
