Leading AI researchers and industry leaders have issued an open statement calling for the risk of extinction from AI to be treated as seriously as pandemics and nuclear war.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," urges a statement released today by the Center for AI Safety.

The statement was signed by the CEOs of leading AI companies, including OpenAI, Google DeepMind, and Anthropic, as well as leading AI scientists in the U.S. and China.

"A historic coalition"

"This represents a historic coalition of AI experts — along with philosophers, ethicists, legal scholars, economists, physicists, political scientists, pandemic scientists, nuclear scientists, and climate scientists — establishing the risk of extinction from advanced, future AI systems as one of the world’s most important problems," a statement from the center said.

The negative impacts of AI are already being felt and need to be addressed, the center said. But it is also necessary to anticipate the risks of more advanced future AI systems.

"Talking about how AI could lead to the end of humanity can feel like science fiction," said Charlotte Siegmann, co-founder of KIRA, a think tank that studies AI risks. "But there are many reasons to be concerned: abuse, competition, ignorance, carelessness, and the unresolved controllability of current and future systems. The open letter underlines this."

Yann LeCun and Meta missing from the list

Notable signatories of the statement include:

  • CEOs of top AI labs: Sam Altman, Demis Hassabis, and Dario Amodei
  • The authors of the standard textbook on Artificial Intelligence (Stuart Russell and Peter Norvig)
  • Two authors of the standard textbook on Deep Learning (Ian Goodfellow and Yoshua Bengio)
  • An author of the standard textbook on Reinforcement Learning (Andrew Barto)
  • Three Turing Award winners (Geoffrey Hinton, Yoshua Bengio, and Martin Hellman)
  • Executives from Microsoft, OpenAI, Google, Google DeepMind, and Anthropic—Meta has not signed
  • The scientists behind famous AI systems such as AlphaGo and every version of GPT (David Silver, Ilya Sutskever)
  • The two most cited computer scientists (Hinton and Bengio), and the most cited scholar in computer security and privacy (Dawn Song)
  • AI professors from Chinese universities
  • Professors who study pandemics, climate change, and nuclear technology
  • Other notable signatories include Marian Rogers Croak (inventor of VoIP, Voice over Internet Protocol) and Kersti Kaljulaid (former President of the Republic of Estonia)

One Turing Award winner is missing: Meta's AI chief Yann LeCun did not sign the statement, and Meta itself is absent from the list. The company currently pursues an open-source approach and is behind the powerful LLaMA models, for example. Also missing is Gary Marcus, who recently testified before the U.S. Senate in favor of more regulation of AI research and companies.

One of the signatories is Connor Leahy, CEO of Conjecture, a company dedicated to applied and scalable alignment research. In a recent episode of Machine Learning Street Talk, he explains in detail why AI poses an existential risk and how we might manage it.

Summary
  • Leading AI companies and researchers have issued an open statement calling for AI risk mitigation to be a global priority, comparable to other global risks such as pandemics and nuclear war.
  • Signatories include the CEOs of OpenAI, Google DeepMind, and Anthropic, as well as prominent AI scientists from the US and China.
  • Meta and Meta's AI chief Yann LeCun did not sign the statement.