
OpenAI's Collective Alignment team aims to make AI more democratic

Image: Midjourney prompted by THE DECODER

At a Glance

  • OpenAI has established a "Collective Alignment" team to better represent the diversity of perspectives and cultures in AI models and to incorporate feedback from the public.
  • The Democratic Inputs to AI grant program will support ten teams worldwide with $100,000 each to develop ideas and tools for collective governance of AI systems.
  • Initial pilot projects will focus on participatory engagement, such as "video deliberation interfaces, platforms for crowdsourced audits of AI models, mathematical formulations of representation guarantees, and approaches to map beliefs to dimensions that can be used to fine-tune model behavior."

A new team at OpenAI is working to ensure that the diversity of perspectives and cultures is better represented in AI models.

The Collective Alignment team of scientists and engineers is tasked with developing a system for gathering feedback from the public and incorporating it into OpenAI's systems. The team will work with external consultants and funding teams to launch pilots and develop prototypes.

OpenAI's leadership has emphasized in the past that the boundaries and goals of AI systems should be defined democratically. So far, this has happened only to a limited extent: ChatGPT in particular is heavily shaped by OpenAI's own policies and political views, and studies have found that the chatbot tends toward left-liberal positions.

First pilot projects for democratizing AI

OpenAI's "Democratic Inputs to AI" funding program aims to involve the public in decisions about AI behavior to align models with human values.


Since May 2023, OpenAI has awarded $100,000 each to ten teams from around the world to develop ideas and tools for the collective control of AI systems.

The initial pilot projects focus on various aspects of participatory engagement, including "video deliberation interfaces, platforms for crowdsourced audits of AI models, mathematical formulations of representation guarantees, and approaches to map beliefs to dimensions that can be used to fine-tune model behavior."

OpenAI summarizes the program's key findings as follows:

Public opinion can change frequently


Many teams found that public opinion changes frequently, which may mean input-collection processes need to run more often.

Bridging the digital divide is still difficult

Recruiting participants across the digital and cultural divide may require additional investment in better outreach and tools.

Reaching agreement between polarized groups

Reaching a compromise can be difficult when a small group holds strong opinions on a particular issue.

Achieving consensus versus representing diversity

Trying to produce a single outcome or decision that represents a group creates tension between striving for consensus and faithfully representing diverse opinions.

Hopes and fears for the future of AI governance

Some participants expressed concerns about the use of AI in politics and called for transparency about when and how AI is used in democratic processes.

An overview and descriptions of the pilot projects are available on the OpenAI website. The company has also introduced new safety rules to prevent AI from being used to manipulate democratic elections.


Source: OpenAI