
The non-profit Future of Life Institute focuses on fundamental threats to human existence. It has now published an open letter calling for a pause in AI development, signed by several prominent figures from business and science.


For at least six months, all AI labs should stop developing AI systems more powerful than GPT-4, the letter says, referring to ongoing projects as "giant AI experiments."

The letter is signed by business leaders such as Elon Musk, Steve Wozniak, and Stability AI (Stable Diffusion) co-founder Emad Mostaque, as well as many notable AI researchers such as Turing Award winner Yoshua Bengio, Berkeley AI professor Stuart Russell, and language model critic Gary Marcus.

Three researchers from Google's AI sister company DeepMind and a Google developer also signed the letter. So far, no one from OpenAI is among them. In total, the open letter currently has 1,125 signatories.


Open Letter to OpenAI and Microsoft

The letter calls on "all AI labs" to stop work on AI systems more powerful than GPT-4. But its undertone is aimed primarily at the aggressive deployment of large language models by Microsoft and OpenAI. Within weeks, both companies have brought AI technology to many millions of people and integrated large language models into numerous standard applications such as search and Office.

The institute cites a lack of planning and oversight in the current spread of AI technology. The AI race is out of control, it says, and even their creators cannot "understand, predict, or reliably control" the impact of these "powerful digital minds."

The letter cites common negative scenarios such as AI propaganda, the automation of many jobs, the displacement of human intelligence, and the eventual loss of control over our own civilization as possible risks.

"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter says.

More rules for AI

AI labs and independent experts should use the pause of at least six months to develop common safety guidelines that are overseen by independent outside experts. These guidelines should ensure that systems "are safe beyond a reasonable doubt."


At the same time, the institute calls for cooperation with legislators, which should at a minimum ensure that AI systems can be monitored, controlled, and tracked by a dedicated authority, for example through a watermarking system.

The institute is not calling for a general halt to AI development, but "merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities."

"Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here," the institute writes.

Summary
  • The Future of Life Institute addresses the fundamental risks to human existence.
  • It now calls for a pause in the development of AI systems more powerful than GPT-4.
  • The pause should last at least six months and be used to discuss fundamental questions about the use of AI and to jointly develop safety guidelines.