
U.S. Senators demand details on OpenAI's safety practices and working conditions by August 2024

OpenAI (YouTube Screenshot)

Key Points

  • A group of U.S. Senators has sent a letter to OpenAI CEO Sam Altman demanding that he disclose detailed information about the company's safety practices and working conditions by August 13, 2024.
  • The letter responds to media reports, some prompted by former employees acting as whistleblowers, that highlight potential safety risks at OpenAI, including the departures of prominent AI safety researchers, security vulnerabilities, and employee concerns.
  • The Senators emphasize that the public must be able to trust that OpenAI is developing its systems safely, covering the integrity of its corporate governance, security testing, employment practices, adherence to public promises, and cybersecurity policies.

A group of U.S. Senators has sent a letter to OpenAI CEO Sam Altman asking for detailed information about the company's safety practices and working conditions by August 13, 2024.

The letter is a response to media reports, some from former employees acting as whistleblowers, that highlight potential safety risks at OpenAI. The New York Times, Vox, and the Washington Post have reported on the departures of several AI safety researchers, vulnerabilities that led to a hack, and employee concerns about OpenAI's safety practices.

Some researchers, such as Jan Leike, former lead of OpenAI's superalignment team, have publicly voiced harsh criticism, stating that OpenAI prioritizes product releases over safety.

The new AI model, GPT-4o, was reportedly rushed through safety testing in just one week. It can be manipulated to generate malicious content, such as bomb-making instructions, simply by phrasing prompts in the past tense.


The Senators note that OpenAI has publicly committed to the safe and responsible development of AI and has made commitments to the Biden Administration to do so. OpenAI is also working with U.S. agencies to develop AI-based cybersecurity tools to protect critical infrastructure.

The public must be able to trust that OpenAI is developing its systems safely, the Senators say. This applies to the integrity of its corporate governance, security testing, employment practices, adherence to public promises, and cybersecurity policies.

Possibly in response to the letter, OpenAI has released several statements via X, pointing to its recently established Safety and Security Committee, the leaked and now officially confirmed five levels of AGI, its Preparedness Framework, and the ongoing revision of the much-criticized employment contracts that restricted the rights of former employees acting as whistleblowers.

