
Everyone is talking about ChatGPT and AI text generation, but OpenAI's real goal is to develop an artificial superintelligence. The company is now reminding us of that distant goal and suggesting how such a system could be kept under control.


Straight from OpenAI's executive suite comes an article on how to control a potential super-AI: Sam Altman, Greg Brockman, and Ilya Sutskever are the authors.

They discuss possible control systems for superintelligent AI. By their definition, these are future AI systems that will be "dramatically more powerful" than even "artificial general intelligence" (AGI), although they do not define the term "superintelligence" more precisely.

Altman, Brockman, and Sutskever suggest that the impact of artificial superintelligence will be far-reaching, both positive and negative, and compare its potential consequences to those of nuclear energy or synthetic biology. Within the next decade, they expect AI systems to "outperform experts in most fields and do as much productive work as the largest companies do today."


Coordination, control, technology

To effectively control superintelligence, they suggest three starting points:

  • Coordination: Leading super-AI development efforts would need to be coordinated to ensure the safe and smooth integration of superintelligent systems into society. This could be done through a global project launched by major governments, or through a collective agreement to limit the rate of growth of AI capabilities.
  • Regulation: OpenAI reiterates the call it made at the U.S. Senate hearing for a regulatory agency similar to the International Atomic Energy Agency (IAEA). Such an agency would be responsible for overseeing superintelligence: it would inspect systems, require audits, enforce security standards, and set usage restrictions and security levels.
  • Technical solutions: To make superintelligent AI safe, humanity would also need to develop technical capabilities. This is an open research question.

While the three OpenAI leaders support strict regulation of superintelligence, they also emphasize the need for a clear boundary that allows companies and open-source projects to develop models below a significant capability threshold without regulation.

"The systems we are concerned about will have power beyond any technology yet created, and we should be careful not to water down the focus on them by applying similar standards to technology far below this bar."

Sam Altman, Greg Brockman, Ilya Sutskever

Humans should be in charge of AI

Altman, Brockman, and Sutskever emphasize the importance of public participation and oversight in governing powerful AI systems. In their view, the limits and goals of these systems should be democratically determined.

Within these broad limits, however, users must have "a lot of control" over the AI system they use. OpenAI CEO Altman has previously announced that his company plans to offer customizable AI models in the future.

Finally, the authors justify developing artificial superintelligence despite all the risks: it could potentially lead to a "much better world" than we can imagine today. Examples are already visible in education, creativity, and productivity, they claim.


Moreover, it would be almost impossible to stop the development of a super-AI. Its emergence is inevitable, the three founders argue, because of the enormous benefits, the declining costs, and the multitude of actors involved. Stopping it would require some kind of "global surveillance regime," and even that is no guarantee, they write. "So we have to get it right."

Summary
  • OpenAI leaders Sam Altman, Greg Brockman, and Ilya Sutskever emphasize the need to develop governance systems for superintelligent AI systems.
  • They propose three main lines of action: coordinate leading development efforts, create a regulatory body similar to the International Atomic Energy Agency (IAEA) to oversee super-AI, and develop the technical capabilities to make super-AI safe.
  • Despite the risks, the authors justify the development of super-AI by arguing that it could lead to a vastly improved world and that its creation is almost inevitable given the enormous benefits, declining costs, and growing number of actors.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.