
Artificial general intelligence is OpenAI's ultimate goal. What exactly that means, however, is still up for debate.

According to OpenAI CEO Sam Altman, systems like GPT-4 or GPT-5 would have passed for AGI to "a lot of people" ten years ago. "Now people are like, well, you know, it's like a nice little chatbot or whatever," Altman said.

The phenomenon Altman describes has a name: It's called the "AI effect," and computer scientist Larry Tesler summed it up by saying, "AI is anything that has not been done yet."

AGI definition needed within the next decade

The AI effect shows how difficult it is to define intelligence, and how the term is used differently in different contexts, such as behavior or cognition.


Altman doesn't see this as a problem: "I think it's great that the goalposts keep getting moved. It makes us work harder."

However, the technology is approaching a stage close enough to OpenAI's understanding of AGI that a definition will need to be agreed upon within the next decade, if not sooner, Altman said.

OpenAI's chief technology officer, Mira Murati, defines AGI as a system that can generalize and take over human work in many areas. Altman believes that AI will initially take over average services, but that human experts will continue to outperform machines in their fields.

Both OpenAI representatives see AI as the most important tool for human progress in the coming decades. They agree that AI could lead to major disruptions in the workforce. Preparing for these changes and engaging as many people as possible in the AI discussion is important, they say.

Altman added that AI systems are becoming increasingly personal. That is why people need to be aware that AI is a tool, not a person.


Microsoft and OpenAI are friends, not frenemies

Neither Altman nor Murati would comment specifically on the status of GPT-5. OpenAI is constantly working on new technologies, Murati said. As systems become more capable, reliability and security become more critical.

For example, whether GPT-5 will stop producing false information, also known as hallucinations, is an open research question, according to Murati. "It's unknown, it's research," said Murati, who seems optimistic that it's doable. Microsoft co-founder Bill Gates recently said OpenAI's technology might have already reached a plateau, but he also believes it can become more reliable in the next two to five years.

Altman also commented on the relationship with Microsoft. There have been reports of tension between OpenAI and its main investor, for example over differing views on AI safety, and because both are competing for the same customers with similar products.

"Yeah, I won't pretend that it's like a perfect relationship, but nowhere near the frenemy category. It's really good. Like we have our squabbles," Altman said. Both Microsoft and OpenAI are "super aligned" on the goal of getting OpenAI's models used as much as possible, Altman said.


New standards for AI training data

Altman emphasized that it will be important for future systems to use data for AI training with people's consent, and to develop new standards for data use.

Consent does not necessarily mean that OpenAI will pay for data. Altman suggests that stakeholders may volunteer data if they better understand the overall societal benefits of AI. OpenAI is experimenting in this area, including with paid partnerships.

"It may be a new way that we think about some of these issues around data ownership and how economic flows work," Altman said.

AI models may also need less training data as they become smarter and more capable, shifting the focus from the quantity to the value of the training data, Altman said. "What really will matter in the future is particularly valuable data."

Altman sees the "existential proof" of this thesis in the way humans learn compared to machines. Humans need less data to learn because they are better at understanding broader concepts.

OpenAI and other AI companies are currently involved in numerous lawsuits alleging copyright infringement for using proprietary data to train AI.

Watch the full interview with Altman and Murati in the video below.

Summary
  • The goal of OpenAI, led by CEO Sam Altman, is to develop Artificial General Intelligence (AGI). But the definition and understanding of AGI is constantly evolving.
  • Systems like GPT-4 or GPT-5, which some people would have considered AGI ten years ago, are now considered "nice little chatbots," says Altman.
  • Altman emphasizes the need to use data to train AI with people's consent and to develop new standards for data use.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.