The European Commission has issued guidelines for providers of very large online platforms (VLOPs) and very large online search engines (VLOSEs) to mitigate systemic risks to electoral processes.
The guidelines are based on Regulation (EU) 2022/2065 (the Digital Services Act, or DSA) and include specific guidance and proposals for mitigating risks that may arise from generative AI.
Examples of generative AI risks cited by the European Commission include synthetic content that falsely portrays political actors or misrepresents events, polls, contexts, or narratives in ways that could deceive voters or manipulate electoral processes.
Generative AI systems can also produce false, incoherent, or fabricated information, known in AI parlance as "hallucinations," which can misrepresent reality and potentially mislead voters.
AI content needs (better) labeling
To mitigate the risks associated with generative AI, providers should ensure that content produced by such systems is identifiable to users, for example through watermarking.
Providers should also give users standard interfaces and easy-to-use tools for labeling AI-generated content, and these labels should remain easily recognizable, including in advertising.
Meta announced such a feature for its social platforms just a few days ago. Most major AI companies have also adopted the C2PA standard (from the Coalition for Content Provenance and Authenticity) for tagging images with provenance metadata.
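To make the labeling idea concrete, here is a minimal, illustrative sketch in Python that embeds and reads a plain-text "AI-generated" disclosure using Pillow's PNG text chunks. It is not a C2PA implementation: real C2PA manifests are cryptographically signed structures embedded with dedicated tooling such as c2patool, and the key names and file paths below are hypothetical.

```python
from PIL import Image  # pip install pillow
from PIL.PngImagePlugin import PngInfo


def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Copy a PNG, embedding a simple AI-disclosure label in its text chunks."""
    img = Image.open(src_path)
    meta = PngInfo()
    # Hypothetical key/value names for illustration only; a real deployment
    # would embed a signed C2PA manifest rather than free-form text.
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", "example-genai-model")
    img.save(dst_path, pnginfo=meta)


def read_label(path: str) -> dict:
    """Return the text metadata found in a PNG, if any."""
    with Image.open(path) as img:
        return dict(getattr(img, "text", {}))


if __name__ == "__main__":
    # "input.png" / "labeled.png" are placeholder file names.
    label_as_ai_generated("input.png", "labeled.png")
    print(read_label("labeled.png"))
```

The obvious limitation, and the reason the guidelines point toward watermarking and standards like C2PA, is that plain metadata is trivially stripped when an image is re-encoded or screenshotted; signed provenance manifests and robust watermarks are much harder to remove.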
The EU also wants providers to ensure that AI-generated information draws on reliable sources wherever possible, and to alert users to potential errors in the generated content. The generation of inaccurate content should be minimized.
In addition, public media literacy should be strengthened, and providers should engage with relevant national authorities and other local stakeholders to escalate election-related issues and discuss solutions.
The document emphasizes the critical role of journalists and media providers with "well-established internal editorial standards and procedures." The availability of trustworthy information from pluralistic sources is important for a functioning electoral process, the EU said.
A cautionary example of generative AI in an election context is Microsoft's Bing Chat, which was criticized for providing false information about upcoming elections in Germany and Switzerland. The chatbot misled users with inaccurate poll numbers and incorrect candidate names.
While Microsoft says it has made significant progress in improving the accuracy of Bing Chat's (now Copilot) responses, these cases highlight the structural problems of relying on generative AI for critical information.