
Eric Joel Horvitz is a computer scientist and director of the Microsoft Research Lab in Redmond. In a new research paper, he describes two new deepfake methods and their far-reaching risks.

In the research paper, "On the Horizon: Interactive and Compositional Deepfakes," Horvitz describes two new deepfake methods that he believes are technically possible in the future and "that we can expect to come into practice with costly implications for society."

Interactive and compositional deepfakes

"Interactive deepfakes" is what Horvitz calls multimodal deepfake clones of real people that are indistinguishable from the real person during video phone calls, for example. Current deepfake systems are mostly limited to exchanging faces - and even that offers only limited interaction possibilities. The illusion breaks down, for example, with head turns or expressive mimics.


For interactive deepfakes, Horvitz envisions combining a range of AI techniques to synthetically recreate poses, facial expressions, gestures, and speech, with a real person steering the clone remotely in the style of a puppeteer ("neural puppetry").


The "compositional deepfake," Horvitz says, is the compilation of a series of deepfakes that together invent a larger story, such as a terrorist attack or a scandal that never happened.


These deepfakes could be interwoven with real events to systematically establish "synthetic histories" - falsifications of history - among subgroups of the population. Horvitz calls the systems behind them "adversarial explanation systems," which attack a communal understanding of world events both retroactively and predictively.


Fighting back across disciplines - as best we can

"In the absence of mitigations, interactive and compositional deepfakes threaten to move us closer to a post-epistemic world, where fact cannot be distinguished from fiction," Horvitz writes.

As possible countermeasures, the researcher lists familiar approaches and systems: educating the public about deepfake risks, regulating the technology, automated deepfake detection, red teaming, and cryptographic proofs of authenticity, for example via digital watermarks.
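To make the cryptographic angle concrete, here is a minimal sketch of how content provenance can work in principle: a capture device signs a hash of the media, and any later edit invalidates the signature. This is an illustration of the general idea only, not Horvitz's proposal or a real standard such as C2PA; the device key is hypothetical, and the example requires the third-party "cryptography" package.

```python
# Minimal provenance sketch: sign media at capture, verify at display.
# Assumption: a (hypothetical) camera holds an Ed25519 private key.
# Requires: pip install cryptography
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()  # stays inside the device
public_key = device_key.public_key()       # published for verifiers

def sign_media(media_bytes: bytes) -> bytes:
    """Sign the SHA-256 digest of the media, binding it to the device key."""
    digest = hashlib.sha256(media_bytes).digest()
    return device_key.sign(digest)

def verify_media(media_bytes: bytes, signature: bytes) -> bool:
    """Check the signature; any alteration of the content invalidates it."""
    digest = hashlib.sha256(media_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

video = b"...raw video bytes from the hypothetical camera..."
sig = sign_media(video)
print(verify_media(video, sig))                # True: untampered original
print(verify_media(video + b"tampered", sig))  # False: content was altered
```

Real provenance schemes add key management, certificate chains, and metadata about edits, but the core guarantee is the one shown here: a deepfake cannot carry a valid signature from the claimed capture device.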

Horvitz also advocates interdisciplinary thinking about how AI advances could be used both for the public good and for malicious purposes, so that coming developments can be anticipated as accurately as possible.


"As we progress at the frontier of technological possibilities, we must continue to envision potential abuses of the technologies that we create and work to develop threat models, controls, and safeguards—and to engage across multiple sectors on rising concerns, acceptable uses, best practices, mitigations, and regulations."

Summary
  • Eric Joel Horvitz, director of the Microsoft Research Lab, describes two new deepfake risks in a recent paper.
  • Multimodal interactive deepfakes are complete digital clones of real people that can be controlled remotely in the style of a puppeteer. Compositional deepfakes describe the merging of different deepfakes to tell a larger fake story.
  • These deepfake methods could lead to "adversarial explanation systems" that attack a shared understanding of world events through synthetically generated histories in social subgroups.