A group of former OpenAI employees, researchers, and nonprofit organizations is urging regulators to block OpenAI's proposed corporate restructuring, arguing it threatens the company's founding mission to develop artificial general intelligence (AGI) for the benefit of humanity.

In an open letter, the group opposes the planned transfer of operational control from a nonprofit entity to a public benefit corporation (PBC), stating that the shift would prioritize shareholder interests over public accountability and safety.

Prominent signatories include AI pioneer and Nobel laureate Geoffrey Hinton (University of Toronto), AI ethics researcher Margaret Mitchell (Hugging Face), and computer scientist Stuart Russell (University of California, Berkeley). The letter is also supported by former OpenAI employees, faculty from Harvard, UCLA, and Columbia, and organizations such as the Center for Humane Technology.

Legal safeguards at risk under new structure

OpenAI was founded in 2015 as a nonprofit, motivated by concerns that for-profit companies might develop AGI without sufficient alignment with the public good. In 2019, OpenAI created a for-profit subsidiary, OpenAI LP, to raise capital while retaining nonprofit oversight. Under this hybrid model, profits were capped, an independent board was instituted, and ownership of AGI technology remained with the nonprofit entity.

The proposed restructuring would shift operational control to a new PBC, leaving the nonprofit as a shareholder without governing authority. According to the letter, this would eliminate key legal protections that currently prioritize OpenAI's mission over profit:

Governance safeguard | Today | Proposed restructuring
1. Profit motives are subordinate to charitable purpose | Yes | No
2. Leadership has a fiduciary duty to advance the charitable purpose, enforceable by the attorneys general | Yes | No
3. Investor profits are capped, with above-cap profits owned by the nonprofit | Yes | Rumored no
4. Majority-independent board commitment | Yes | Unknown
5. AGI, when developed, belongs to the nonprofit for the benefit of humanity | Yes | No by default
6. Stop-and-assist commitment from the Charter | Yes | Unknown

The authors write: "The proposed restructuring threatens all of these safeguards, and therefore should be stopped."

The letter emphasizes that OpenAI's nonprofit structure is not symbolic, but a legally binding mechanism that ensures its mission remains paramount. The group urges the attorneys general of California and Delaware—who oversee nonprofits registered in their states—to investigate the proposed changes.

Concerns over AGI control and Microsoft access

The letter further criticizes the lack of transparency around who would control AGI technology under the new structure, or whether Microsoft—OpenAI's largest investor—would gain access to it. The authors also raise concerns that the "stop-and-assist" clause from OpenAI's charter, which requires the company to support other safety-aligned AGI projects if they reach AGI first, would no longer be enforceable.

According to the letter, OpenAI's restructuring "would benefit OpenAI's shareholders at the public's expense." The authors argue it would affect how AGI is developed, who profits from it, and who ultimately controls it.

"Whether OpenAI invests the time and resources to ensure its AI systems are safe and beneficial to humanity or recklessly races forward for competitive and financial gain" is at stake, the authors claim.

The authors also challenge OpenAI's rationale for the restructuring. OpenAI has cited the need for a simpler capital structure to remain competitive with other well-funded AI firms. The letter disputes this reasoning, noting that, unlike OpenAI under its current nonprofit structure, those competitors are not bound by any fiduciary duty to prioritize public welfare.

"It [OpenAI] cites the fact that investor interests are subordinate to the mission as a problem that needs to be solved, but that's exactly what cannot change without subverting the mission," the letter states.

OpenAI has also claimed the reorganization would allow the nonprofit to become "one of the best resourced non-profits in history." However, the authors believe that the nonprofit would only receive the fair market value of its interest in the for-profit arm and would lose control over AGI governance.

Former employees echo concerns in court filing

These concerns are not limited to the open letter. Twelve former OpenAI employees previously submitted a letter supporting Elon Musk's lawsuit against the company, echoing many of the same criticisms. Their filing alleges that OpenAI used its nonprofit status to attract safety-oriented researchers while internally preparing for a commercial pivot—one of several points that align closely with the arguments raised in the open letter.

In a sworn statement, former researcher Todor Markov accused CEO Sam Altman of enforcing restrictive confidentiality agreements and misrepresenting the company's long-term intentions. According to these former employees, the planned shift to a PBC would represent a fundamental breach of OpenAI's founding mission.

If the restructuring does not proceed, OpenAI may face significant financial challenges. Some funding from its current investment round is reportedly contingent on completing the reorganization by the end of the year. Without it, OpenAI risks losing investor support and could struggle to raise the billions needed for computing infrastructure, model training, and staffing.

Summary
  • An open letter signed by prominent AI researchers, including Geoffrey Hinton, Margaret Mitchell, and Stuart Russell, warns that OpenAI's proposed move to become a public benefit corporation threatens its commitment to serving the public good.
  • The letter argues that this restructuring would eliminate legally binding requirements to prioritize public benefit, remove investor profit caps, and weaken oversight of advanced AI technology.
  • Similar worries have been raised by former OpenAI employees, who have accused CEO Sam Altman of using nondisclosure agreements to enable internal changes aimed at increasing profits.