Existential risk: AI elite calls for global prioritization of AI risks

Maximilian Schreiner

Leading AI researchers and industry figures have issued an open statement calling for the risk of extinction from AI to be taken as seriously as pandemics and nuclear war.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," urges a statement released today by the Center for AI Safety.

The statement was signed by the CEOs of leading AI companies, including OpenAI, Google DeepMind, and Anthropic, as well as leading AI scientists in the U.S. and China.

"A historic coalition"

"This represents a historic coalition of AI experts — along with philosophers, ethicists, legal scholars, economists, physicists, political scientists, pandemic scientists, nuclear scientists, and climate scientists — establishing the risk of extinction from advanced, future AI systems as one of the world’s most important problems," a statement from the center said.

The negative impacts of AI are already being felt and need to be addressed, the center said, but it is also necessary to anticipate the risks posed by more advanced future AI systems.

"Talking about how AI could lead to the end of humanity can feel like science fiction," said Charlotte Siegmann, co-founder of KIRA, a think tank that studies AI risks. "But there are many reasons to be concerned: abuse, competition, ignorance, carelessness, and the unresolved controllability of current and future systems. The open letter underlines this."

Yann LeCun and Meta missing from the list

Notable signatories of the statement include Turing Award winners Geoffrey Hinton and Yoshua Bengio, OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei.

One Turing Award winner is missing: Meta's chief AI scientist Yann LeCun did not sign the letter, and Meta itself is absent from the list. The company currently pursues an open-source strategy and is behind the powerful LLaMA models. Also missing is Gary Marcus, who recently testified in the U.S. Senate in favor of stronger regulation of AI research and companies.

One of the signatories is Connor Leahy, CEO of Conjecture, a company dedicated to applied and scalable alignment research. In a recent episode of Machine Learning Street Talk, he explains in detail why AI poses an existential risk and how we might manage it.