
A group of current and former employees of leading AI companies, most of them from OpenAI, along with others from Google and Anthropic, has published an open letter. They warn that advanced AI poses serious risks and call for more oversight of the industry.


"[…] we believe in the potential of AI technology to deliver unprecedented benefits to humanity," the letter says, but "we also understand the serious risks posed by these technologies."

The risks, according to the letter, include the entrenchment of existing inequalities, manipulation and misinformation, and the loss of control over autonomous AI systems, potentially resulting in human extinction.

According to the letter, AI companies have strong financial incentives to avoid effective oversight, and their corporate governance structures are not sufficient to change this.


"AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm," the signatories say.

"However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily."

The letter calls for whistleblower protections in the AI sector. As long as there is no real government oversight of AI companies, current and former employees are some of the only people who can hold these companies accountable to the public, the letter says.

"Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated," they write, which is also why six current and former OpenAI employees signed the letter anonymously. Five former OpenAI employees signed the letter by name.

The letter asks AI companies to agree to:

  • Do not enter into or enforce agreements that prohibit criticism of the company over risk-related concerns, and do not retaliate against risk-related criticism by withholding vested financial benefits.
  • Provide an anonymous and verifiable way for current and former employees to raise risk concerns with the company's board of directors, regulators, and an appropriate independent group with relevant expertise.
  • Support a culture of open criticism and allow current and former employees to raise risk concerns about the company's technology with the public, the board, regulators, or an appropriate independent group with relevant expertise, as long as trade secrets and other intellectual property rights are properly protected.
  • Do not retaliate against current or former employees who have shared risk-related confidential information after other steps have failed.

Three icons of AI research, Geoffrey Hinton, Yoshua Bengio, and Stuart Russell, endorse the open letter. But their views on the dangers of AI, especially Hinton's existential warnings, are not without controversy in the field. Other prominent AI researchers, such as Yann LeCun, accuse them of fearmongering.

OpenAI remains at the center of the AI safety debate

OpenAI has been criticized for non-disparagement clauses that prevented departing employees from speaking negatively about the company. OpenAI says it is in the process of removing these clauses.

OpenAI also disbanded its long-term AI risk team just a year after it was created, after several of its safety researchers left the company. Team lead Jan Leike explained his resignation by saying that OpenAI's safety culture and processes had taken a back seat to "shiny products." He has since joined competitor Anthropic.

The firing and subsequent rehiring of Sam Altman in November 2023 also ultimately stemmed from safety concerns, which the board members responsible justified by citing Altman's alleged character flaws. The current OpenAI board and an investigation by a law firm have since cleared Altman.


"We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society and other communities around the world," an OpenAI spokesperson told CNBC.

The company has established an anonymous integrity hotline and recently formed a new Safety and Security Committee, of which CEO Sam Altman is a member.

Summary
  • A group of current and former employees of leading AI companies, including OpenAI, Google, and Anthropic, warn in an open letter of the risks of advanced AI, including the cementing of inequality, manipulation, disinformation, and potential loss of control.
  • They call for greater government oversight of the industry, as AI companies have strong financial incentives to evade oversight and disclose little non-public information about their systems.
  • The signatories also call for better whistleblower protections in the AI industry, noting that current protections focus on illegal activity while many of the risks remain unregulated.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.