
OpenAI, the company behind ChatGPT and numerous other commercial AI applications, has long pursued the goal of developing artificial general intelligence (AGI) that "benefits all of humanity." Now, the organization is significantly revising its perspective on how this transformative technology might emerge.


In a recent blog post, OpenAI explains that it no longer expects artificial general intelligence to arrive as a dramatic breakthrough moment where AI systems suddenly achieve superhuman capabilities. Instead, the company now views AGI development as a continuous process marked by steady, incremental improvements.

OpenAI's changing perspective on AGI has also led to a more practical approach to safety. Instead of planning for theoretical future scenarios, the company now wants to learn from hands-on experience with today's AI systems. Of course, this aligns well with their commercial strategy.

The company aims to build safety measures that grow stronger alongside advancing AI capabilities. Recently, they unveiled a new safety system that can scale up as AI becomes more powerful, potentially even to AGI-level capabilities.


Throughout all of this, OpenAI maintains that keeping humans in control is crucial. They believe society should have a say in how AI systems behave and what values they reflect. To make this possible, they're building tools that help people communicate clearly with AI systems and stay in control, even when dealing with AIs that might become more capable than humans in certain areas.

Learning from past caution

OpenAI now views some of its earlier safety decisions differently. The company points to its 2019 decision to temporarily withhold GPT-2 from public release as an example of "outsized caution" that stemmed from viewing AGI as a sudden breakthrough. Back then, the company only released a smaller version of the model "due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale."

However, Miles Brundage, former Head of Policy Research at OpenAI, challenges the company's new reading of the GPT-2 decision. He argues that GPT-2's staged release already embodied the philosophy of iterative development, and that many safety experts considered the cautious approach reasonable given the information available at the time.

AGI is what makes a ton of money

Recent reporting by The Information revealed that OpenAI's contract with Microsoft includes a notably practical definition of AGI: a system that outperforms humans in most economically valuable tasks and generates at least $100 billion in profit.

This pragmatic definition fits a broader pattern: OpenAI's revised view of AGI as a gradual progression, Microsoft CEO Satya Nadella's criticism of "nonsensical benchmark hacking" in pursuit of self-proclaimed AGI milestones, and Altman himself telling his X followers to "pls chill and cut your expectations 100x" amid speculation about an imminent AGI release.

Summary
  • OpenAI has abandoned the idea that Artificial General Intelligence (AGI) will be a single, dramatic moment. Instead, the company views AGI as a continuous evolutionary process of incremental improvements.
  • OpenAI's safety strategy now takes an empirical approach: safety measures are developed and refined through real-world experience with current systems. Particular emphasis is placed on safeguards that grow stronger as AI capabilities increase.
  • The company emphasizes the central role of human control over powerful AI systems. It is working on mechanisms that let people clearly communicate their intentions and effectively steer AI systems, even when those systems surpass human capabilities in certain areas.
Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.