OpenAI is selling AI like no other company on the market. But according to OpenAI CEO Sam Altman, the overarching goal remains Artificial General Intelligence (AGI), or as he puts it, "magic intelligence in the sky." Unfortunately, magic is expensive.
OpenAI CEO Sam Altman told the Financial Times that he is interested in securing additional funding from Microsoft, the startup's largest investor, to develop artificial general intelligence (AGI).
Microsoft invested $10 billion in OpenAI earlier this year. Altman is hoping for more funding from Microsoft and other investors to cover the high cost of developing advanced AI models.
Selling AI is a means to an end
"Right now, people [say] ‘you have this research lab, you have this API [software], you have the partnership with Microsoft, you have this ChatGPT thing, now there is a GPT store’. But those aren’t really our products," Altman said. "Those are channels into our one single product, which is intelligence, magic intelligence in the sky. I think that’s what we’re about."
The partnership with Microsoft is designed to allow both companies to benefit from each other's success, Altman said. Despite growing revenues this year, OpenAI is not yet profitable.
As for GPT-5, Altman said it is technically difficult to predict what new capabilities the model might have over its predecessors.
"Until we go train that model, it’s like a fun guessing game for us. We’re trying to get better at it, because I think it’s important from a safety perspective to predict the capabilities. But I can’t tell you here’s exactly what it’s going to do that GPT-4 didn’t."
Greg Brockman, co-founder of OpenAI, recently told French President Emmanuel Macron that predictability is an essential safety feature for future AI.
Altman doesn't think language alone is enough for AGI. Language is a uniquely effective way of compressing information, and thus an important factor in the development of intelligence, something his startup first noticed while developing GPT-3, he said.
But the biggest missing piece of the puzzle for an AI, Altman said, is teaching systems a basic understanding of things, so they can develop new knowledge for humanity. "I think that’s the biggest thing to go work on."
Yann LeCun, head of Meta's AI department, and several other researchers also believe that while large language models (LLMs) are a building block for even more capable AI systems, AGI cannot be achieved by simply scaling LLMs. It would be like trying to reach the moon by building longer ladders.
AI models could get smaller again
Meanwhile, the AI industry, and model developers in particular, are working to make generative AI cheaper and more efficient. Microsoft, which uses generative AI extensively in its products, is reportedly researching smaller models.
It was recently revealed that OpenAI's LLM GPT-3.5, which outperforms the original GPT-3 as a chatbot thanks to RLHF, has only 20 billion parameters. The original GPT-3, released in 2020, has 175 billion parameters.
The new GPT-4 Turbo is also likely a distilled version of the original GPT-4, as suggested by OpenAI's significant cut to the inference price, which implies a more efficient and much faster model. OpenAI's AI prototypes are reportedly codenamed after the deserts Gobi, Sahara, and Arrakis, underscoring a focus on resource efficiency.
A new rumor from OpenAI leaker "Jimmy Apples" on Twitter also points toward a focus on efficiency: by the end of 2025, OpenAI reportedly plans to have a model that is significantly better than GPT-4 but has only one to ten billion parameters, a fraction of the alleged 1.8 trillion parameters of the current GPT-4 model.
When introducing GPT-4 Turbo, Altman called it OpenAI's "smartest" model. That is a rather open-ended choice of words, since "smart" can mean many things, including higher resource efficiency relative to performance. Had GPT-4 Turbo simply been more intelligent or more capable than GPT-4, Altman would probably have said so.