With a new disclosure requirement, Google is preparing for AI-generated content in election ads.
According to Google, starting in mid-November, political advertisers will have to "prominently disclose" whether and how they are using "synthetic content that inauthentically depicts real or realistic-looking people or events."
The disclosure must appear in a place where it is most likely to be noticed by users. The new rule applies to images, videos, and audio files.
Google exempts from the new rule minor edits that are irrelevant to the ad's message. According to Google, these include "editing techniques such as image resizing, cropping, color or brightening corrections, defect correction (for example, 'red eye' removal), or background edits that do not create realistic depictions of actual events."
Google prepares for deepfakes in the US election campaign
As an example of ads subject to the new transparency requirement, Google cites "an ad with synthetic content that makes it appear that a person said or did something they did not say or do," a description that covers deepfakes in general.
Ads with altered footage of a real event, or with a realistic portrayal of a fictional event, must also carry the disclosure. These are only examples; other forms of advertising may require disclosure as well.
Early examples of AI-assisted propaganda in politics show why such rules are urgently needed. One notable case was the fake photo of Donald Trump in handcuffs, which circulated widely on social media and in news outlets.
There are other examples of questionable uses of AI in politics: Toronto mayoral candidate Anthony Furey used AI to generate streetscapes of Toronto showing people without shelter, and Norbert Kleinwächter, deputy leader of Germany's far-right AfD parliamentary group, used AI to generate images of angry-looking men with dark skin and eyes, as well as portrayals of political opponents.