OpenAI's CEO says governments should be aware of AI training "above a certain scale"

Matthias Bastian
Portrait of Sam Altman. | Image: OpenAI

Sam Altman, co-founder of OpenAI, believes that artificial general intelligence (AGI) is possible and that his company is a pioneer in the field.

In a blog post, Altman describes his vision of artificial general intelligence and how to get there.

Altman makes it clear in his post that he sees OpenAI's current progress as moving toward general AI ("as our systems get closer to AGI"), but also acknowledges that one cannot "exactly" predict the future and that current progress may hit a wall. A slow start with many feedback loops is, in OpenAI's view, the safest way to bring AI into society.

We currently believe the best way to successfully navigate AI deployment challenges is with a tight feedback loop of rapid learning and careful iteration. Society will face major questions about what AI systems are allowed to do, how to combat bias, how to deal with job displacement, and more.

The optimal decisions will depend on the path the technology takes, and like any new field, most expert predictions have been wrong so far. This makes planning in a vacuum very difficult.

Sam Altman / OpenAI

Less openness, more safety

According to Altman, OpenAI will continue on its current path and publish only sparingly about its models, a choice that has been criticized by the scientific community.

OpenAI's original idea of being "open" (in the sense of open source) was wrong, Altman writes. Instead, he says, the goal is to safely share access to the systems and their benefits. Still, OpenAI plans to keep releasing open-source models, such as the recent Whisper speech-to-text model.

As the systems become more powerful, OpenAI plans to be more cautious about releases. If risks rise sharply, the current strategy of continuous deployment could see a "significant change," Altman writes.

At some point, independent review before training new AI systems and limits on the growth of AI models may become necessary; this would require public standards. Above a certain scale, "it's important that major world governments have insight about training runs," Altman writes.

OpenAI's vision for the future is AGI

OpenAI's goal is a human-friendly AGI that advances humanity. Large language models such as GPT-3 and ChatGPT, which absorb some of the world's knowledge by learning from millions of texts, could be an intermediate step toward more general AI systems.

Proponents of this thesis believe that large AI models will continue to develop new capabilities with more diverse data and greater scale. The thesis is controversial among researchers, however: opponents argue that pure scaling is a dead end on the road to human-like AI, and that models need a fundamental understanding of the world that training on data alone cannot provide.

Altman supports the scaling thesis and believes it could lead to something big: "We can imagine a world in which humanity flourishes to a degree that is probably impossible for any of us to fully visualize yet. We hope to contribute to the world an AGI aligned with such flourishing."