
More than 1,000 experts and public figures are calling for a ban on developing superintelligent AI until there is "broad scientific consensus that it will be done safely and controllably" and "strong public buy-in."


According to the statement, leading AI companies are racing to build systems that could outthink humans at nearly every cognitive task within "the coming decade." The signatories warn of major risks: economic upheaval, loss of human control, threats to civil liberties, and even the potential extinction of humanity.

Some of the field's biggest names have signed on, including Turing Award winners Geoffrey Hinton and Yoshua Bengio, who are among the world's most cited AI researchers. According to Bengio, new AI systems should be "fundamentally incapable of harming people," either through unintended behavior or misuse. Stuart Russell, co-author of Artificial Intelligence: A Modern Approach, says this time the group isn't calling for "a ban or even a moratorium in the usual sense," but simply making a "proposal to require adequate safety measures."

Support stretches well beyond academia. Apple co-founder Steve Wozniak, Virgin founder Richard Branson, former US National Security Advisor Susan Rice, and Prince Harry and Meghan are among the signers. Historian Yuval Noah Harari warns that superintelligence could destroy the "operating system of human civilization," while Prince Harry adds: "The future of AI should serve humanity, not replace it."


The push comes from the Future of Life Institute, the same organization that called for a six-month pause on advanced AI research in an open letter in early 2023. That letter made headlines and sparked debate, but ultimately had no effect on the industry's pace.

Deep divides in the AI community

The statement exposes just how divided the field has become. Some focus on existential risks, while others see those warnings as exaggerated or even strategic.

Yann LeCun, chief AI scientist at Meta, recently criticized Anthropic CEO Dario Amodei for warning about AGI dangers in public while building the same kinds of systems himself. LeCun called this stance "intellectually dishonest," arguing that today's large language models are overrated and headed for a technological dead end.

Stuart Russell also warns of a possible AI bubble: inflated expectations could trigger a sudden loss of confidence, much like the dot-com bust. Even OpenAI chairman Bret Taylor sees strong echoes of the dot-com era in today's AI boom.

Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.