AI and society

OpenAI's Collective Alignment team aims to make AI more democratic

Matthias Bastian

Midjourney prompted by THE DECODER

A new team at OpenAI is working to ensure that the diversity of perspectives and cultures is better represented in AI models.

The Collective Alignment team of scientists and engineers is tasked with developing a system for gathering feedback from the public and incorporating it into OpenAI's systems. The team will work with external consultants and the grant teams from OpenAI's funding program to launch pilots and develop prototypes.

OpenAI's leadership has repeatedly emphasized that the boundaries and goals of AI systems should be defined democratically. So far, this has happened only to a limited extent: ChatGPT in particular is heavily shaped by OpenAI's own policies and values, and studies suggest the chatbot leans left-liberal.

First pilot projects for democratizing AI

OpenAI's "Democratic Inputs to AI" funding program aims to involve the public in decisions about AI behavior to align models with human values.

Since May 2023, OpenAI has awarded $100,000 each to ten teams from around the world to develop ideas and tools for the collective control of AI systems.

The initial pilot projects focus on various aspects of participatory engagement, including "video deliberation interfaces, platforms for crowdsourced audits of AI models, mathematical formulations of representation guarantees, and approaches to map beliefs to dimensions that can be used to fine-tune model behavior."

OpenAI summarizes the program's key findings as follows:

Public opinion can change frequently

Many teams found that public opinion can shift frequently, which affects how often input needs to be collected.

Bridging the digital divide is still difficult

Recruiting participants across digital and cultural divides may require additional investment in better outreach and tools.

Reaching agreement between polarized groups

It can be difficult to find a compromise when a small group has strong opinions on a particular issue.

Achieving consensus versus representing diversity

Trying to produce a single outcome or decision that represents a group can create tension between striving for consensus and adequately representing diverse opinions.

Hopes and fears for the future of AI governance

Some participants expressed concerns about the use of AI in politics and called for transparency about when and how AI is used in democratic processes.

An overview and description of the pilot projects is available on the OpenAI website. The company has also introduced new safety rules to prevent AI from being used to manipulate democratic elections.
