The 2024 US elections are coming up, and many expect AI to play a role, mostly in manipulation and misinformation. OpenAI is getting ready.
OpenAI wants to prevent the potential misuse of its AI tools, especially in light of the upcoming US elections. According to a blog post, this includes misleading "deepfakes," large-scale influence operations, and chatbots that mimic candidates.
The company has now announced several safety measures and revised its ChatGPT and API usage guidelines to reduce the risk of abuse.
Beginning in early 2024, DALL-E 3 images will be tagged with invisible provenance metadata based on the C2PA standard. This cryptographically signed information records an image's origin and can show whether it was generated using OpenAI technology.
The standard is being driven by the Content Authenticity Initiative (CAI), whose members include AI players like Adobe as well as camera makers like Sony, Canon, and Leica. Google unveiled its own solution to the same problem last year with SynthID.
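For readers who want to inspect such provenance data themselves, the minimal sketch below shows one possible way to do it. It assumes the open-source c2patool command-line utility from the C2PA project is installed and prints an image's manifest store as JSON when given a file path; the JSON field names used here are illustrative and may differ between tool versions.

```python
# Sketch: inspect C2PA provenance metadata embedded in an image.
# Assumes the `c2patool` CLI is installed and emits the manifest store as JSON;
# the field names below ("manifests", "claim_generator") are assumptions based
# on the tool's typical output and may vary across versions.
import json
import subprocess
import sys


def read_c2pa_manifest(image_path: str) -> dict | None:
    """Return the parsed C2PA manifest store for an image, or None if absent."""
    result = subprocess.run(
        ["c2patool", image_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # Typically means no C2PA manifest was found or the file is unreadable.
        return None
    return json.loads(result.stdout)


if __name__ == "__main__":
    manifest = read_c2pa_manifest(sys.argv[1])
    if manifest is None:
        print("No C2PA provenance data found.")
    else:
        # "claim_generator" usually names the tool that signed the manifest,
        # e.g. an image generator or photo editor (assumed field name).
        for entry in manifest.get("manifests", {}).values():
            print("Signed by:", entry.get("claim_generator", "unknown"))
```

Run as `python check_provenance.py image.png`. Note that an empty result does not prove an image is trustworthy: provenance metadata of this kind can be lost when a file is re-encoded or screenshotted.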
OpenAI is also experimenting with a provenance classifier, a new tool for recognizing DALL-E-generated images that the company says has shown promising results in internal tests.
Initially, the tool will be made available to journalists, platforms, and researchers to get feedback. This cautious approach may be inspired by experience: OpenAI had to shut down its text classification tool in July 2023 due to a lack of reliability.
ChatGPT will display news from reliable and trusted sources
ChatGPT will be more closely linked to real-time news coverage, including sources and links. This transparency should help voters evaluate information and decide whom to trust.
This move can also be seen as an attempt to drive traffic back to media outlets, with the New York Times lawsuit still looming. OpenAI has signed its first licensing deals with Axel Springer and the Associated Press, and more are likely to follow.
In its blog post, OpenAI explains the specific restrictions on political communication via ChatGPT. For example, its tools may not be used for political campaigning or lobbying.
AI-generated images have already been used for propaganda on several occasions, and text generated by ChatGPT can be quite persuasive as well. OpenAI considers persuasiveness a key AI risk; CEO Sam Altman has said he expects AI to be capable of superhuman persuasion long before superhuman intelligence.
OpenAI bans chatbots that impersonate real people, such as candidates, or real institutions. In addition, it prohibits any applications of its technology that "deter people from participation in democratic processes."
Through the recently launched GPT Store, users can access third-party chatbots for free. If they encounter offensive or policy-violating content, they can report it using a dedicated button.
In the U.S., OpenAI is working with the National Association of Secretaries of State (NASS); for certain procedural questions about voting, ChatGPT will direct users to the CanIVote.org resource site. Lessons learned from this collaboration will inform OpenAI's approach in other countries and regions.
A year ago, OpenAI and its partners published a paper describing in detail the potential societal risks of large language models in the context of disinformation and AI propaganda. Together with Georgetown University's Center for Security and Emerging Technology and the Stanford Internet Observatory, OpenAI developed a framework of possible countermeasures against AI-powered disinformation campaigns.