How will the world end? A new paper provides an overview of catastrophic AI risks.
The rapid development of artificial intelligence has led to growing concern among researchers and policymakers in recent months. With hearings in the US Senate, an open letter calling for a pause on large AI training runs, and a world tour of sorts by OpenAI CEO Sam Altman, the debate about the direction and potential risks of AI systems reached new heights in 2023.
The US-based Center for AI Safety (CAIS) has now published a comprehensive overview of "catastrophic AI risks," i.e., risks that could cause large-scale harm.
The four categories of catastrophic AI risk
The researchers divided the risks into four categories:
- Malicious use, where individuals or groups deliberately use AI to cause harm. Examples include propaganda, censorship and bioterrorism.
- AI races, where a competitive environment forces actors to use insecure AI or cede control to AI. An example is an arms race leading to a form of automated warfare that spirals out of control.
- Organizational risks, which show how human factors and complex systems can increase the likelihood of catastrophic accidents. Examples include the accidental release of a dangerous AI system to the public, or safety research failing to keep pace with advancing AI capabilities.
- Rogue AIs, a category describing the inherent difficulty of controlling agents that are far more intelligent than humans. Examples include AI systems that drift from their intended goals or actively seek power.
For each risk category, the team describes specific hazards, provides illustrative stories, outlines ideal scenarios, and makes practical suggestions for mitigating these hazards.
"By proactively addressing these risks, we can work toward realizing the benefits of AI while minimizing the potential for catastrophic outcomes," the team says.
For the full overview, see the paper "An Overview of Catastrophic AI Risks."