OpenAI, the company behind ChatGPT and numerous other commercial AI applications, has long pursued the goal of developing artificial general intelligence (AGI) that "benefits all of humanity." Now, the organization is significantly revising its perspective on how this transformative technology might emerge.
In a recent blog post, OpenAI explains that it no longer expects artificial general intelligence to arrive as a dramatic breakthrough moment where AI systems suddenly achieve superhuman capabilities. Instead, the company now views AGI development as a continuous process marked by steady, incremental improvements.
OpenAI's changing perspective on AGI has also led to a more practical approach to safety. Instead of planning for theoretical future scenarios, the company now wants to learn from hands-on experience with today's AI systems. This approach, of course, also aligns neatly with its commercial strategy.
The company aims to build safety measures that grow stronger alongside advancing AI capabilities. Recently, they unveiled a new safety system that can scale up as AI becomes more powerful, potentially even to AGI-level capabilities.
Throughout all of this, OpenAI maintains that keeping humans in control is crucial. They believe society should have a say in how AI systems behave and what values they reflect. To make this possible, they're building tools that help people communicate clearly with AI systems and stay in control, even when dealing with AIs that might become more capable than humans in certain areas.
Learning from past caution
OpenAI now views some of its earlier safety decisions differently. The company points to its 2019 decision to temporarily withhold GPT-2 from public release as an example of "outsized caution" that stemmed from viewing AGI as a sudden breakthrough. Back then, the company only released a smaller version of the model "due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale."
However, Miles Brundage, former Head of Policy Research at OpenAI, challenges OpenAI's new reading of the GPT-2 episode. He argues that GPT-2's staged rollout already embodied the philosophy of iterative deployment, and that many safety experts appreciated the careful approach given the information available at the time.
AGI is what makes a ton of money
Recent reporting by The Information revealed that OpenAI's contract with Microsoft includes a notably practical definition of AGI: a system that outperforms humans in most economically valuable tasks and generates at least $100 billion in profit.
This aligns with OpenAI's revised view of AGI as a gradual progression, with Microsoft CEO Satya Nadella's criticism of "nonsensical benchmark hacking" in pursuit of self-proclaimed AGI milestones, and with Altman himself telling his X followers to "pls chill and cut your expectations 100x" amid speculation about an imminent AGI release.