OpenAI bosses share some thoughts on how to control a superintelligence

Matthias Bastian

Everyone is talking about ChatGPT and AI text generation, but OpenAI's real goal is to develop an artificial superintelligence. The company now reminds us of that distant goal and suggests how to keep such a system under control.

Straight from OpenAI's executive suite comes an article on how to control a potential super-AI: Sam Altman, Greg Brockman, and Ilya Sutskever are the authors.

They discuss possible control systems for superintelligent AI systems, which they describe as future AI systems "dramatically more powerful" than even "artificial general intelligence" (AGI), although they do not define the term "superintelligence" more precisely.

Altman, Brockman, and Sutskever suggest that the impact of artificial superintelligence will be far-reaching, both positive and negative, and compare the potential consequences to those of nuclear energy or synthetic biology. Within the next decade, they write, AI systems will "outperform experts in most fields and do as much productive work as the largest companies do today."

Coordination, control, technology

To effectively control superintelligence, they suggest three starting points: coordination among the leading development efforts, an international oversight authority along the lines of the IAEA, and the technical capability to make a superintelligence safe.

While the three OpenAI leaders support strict regulation of superintelligence, they also emphasize the need for a clear boundary that allows companies and open-source projects to develop models below a significant capability threshold without regulation.

"The systems we are concerned about will have power beyond any technology yet created, and we should be careful not to water down the focus on them by applying similar standards to technology far below this bar."

Sam Altman, Greg Brockman, Ilya Sutskever

Humans should be in charge of AI

Altman, Brockman, and Sutskever emphasize the importance of public participation and oversight in governing powerful AI systems. In their view, the limits and goals of these systems should be democratically determined.

Within these broad limits, however, individual users should have "a lot of control" over the AI system they use. OpenAI CEO Altman has previously announced that his company plans to offer customizable AI models in the future.

Finally, the authors justify developing artificial superintelligence despite all the risks: it could lead to a "much better world" than we can imagine today, with early examples already visible in education, creativity, and productivity, they claim.

Moreover, it would be almost impossible to stop the development of a super-AI. Its emergence is inevitable, the three founders argue, because of the enormous benefits, the falling costs, and the multitude of actors involved. Stopping it would require some kind of "global surveillance regime," and even that is no guarantee, they write. "So we have to get it right."
