
Sam Altman is CEO of OpenAI, arguably one of the most important AI companies today alongside DeepMind and Google. Now he has talked about upcoming AI projects such as GPT-4 and artificial general intelligence.

Last year, the AI company OpenAI unveiled GPT-3, a huge language model with 175 billion parameters. The size of the neural network pays off: GPT-3 generates, translates, and summarizes text with unprecedented quality, far surpassing the performance of GPT-1 (2018) and GPT-2 (2019). GPT-3 is accessible via an API that OpenAI provides to selected partners and enterprises.

The impressive performance of the text AI started a worldwide trend toward large AI models meant to match or surpass GPT-3. In Europe, for example, the OpenGPT-X large AI model is emerging.

Then, in early 2021, OpenAI showed its next advances: the multimodal AI models DALL-E and CLIP, which are trained on text and image data and therefore offer additional capabilities. In June, OpenAI followed with Codex, a code-specialized GPT-3 variant and a demonstration that large, more general AI models can be specialized for individual tasks downstream.


Altman holds informal conversation about OpenAI's future - and has summary removed

Now, during an online meetup, OpenAI CEO Sam Altman talked about the future of GPT, DALL-E, and artificial general intelligence.

Altman made his comments at an event with about 250 attendees, mostly from the LessWrong online community. LessWrong is increasingly concerned with artificial intelligence and the alignment problem, i.e. the question of how a possible artificial general intelligence can be designed safely.

Altman's statements in this text come from a summary of the event that a user posted on LessWrong. No recording of the event exists, and the summary was removed at Altman's request because he had wanted the exchange to stay off the record.

Altman's statements should therefore not be taken as official announcements, but they arguably still offer a glimpse into OpenAI's future.

GPT-4 comes after Codex improvements

A few weeks ago, the first beta testers were given API access to Codex. The model is now being steadily improved through user feedback, and progress is already being made, according to Altman.


The team's focus is currently on Codex: the available computing power will be used to further develop the model, he said. According to Altman, Codex is less than a year away from changing the way programmers work.

Asked about the financial success of the projects to date, Altman said that API access to Codex and GPT-3 is profitable, but cannot yet pay for next-generation AI.

Altman also confirmed work on GPT-4: like GPT-3, the model will be text-only rather than a multimodal AI model like DALL-E.

GPT-4 probably won't be much larger than GPT-3, but will require significantly more computing power, Altman said. Progress should come primarily from higher-quality data, better algorithms, and more precise fine-tuning.


GPT-4 will also be able to handle more context, the OpenAI chief said. GPT-3's current limit is 2,048 tokens, while Codex's is 4,096 tokens.
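The difference between those context limits can be sketched in a few lines of code. This is a hypothetical illustration, not OpenAI's tooling: the `rough_token_count` helper approximates tokens by whitespace-separated words, whereas the real models use byte-pair encoding, which usually produces more tokens per text.

```python
# Hypothetical sketch of the context limits named above:
# 2,048 tokens for GPT-3 and 4,096 tokens for Codex.

GPT3_CONTEXT = 2048
CODEX_CONTEXT = 4096

def rough_token_count(text: str) -> int:
    """Very rough proxy: one token per whitespace-separated word.
    Real BPE tokenizers typically yield more tokens than this."""
    return len(text.split())

def fits_in_context(text: str, limit: int) -> bool:
    """True if the (approximate) token count stays within the model's limit."""
    return rough_token_count(text) <= limit

prompt = "word " * 3000  # a prompt of roughly 3,000 tokens
print(fits_in_context(prompt, GPT3_CONTEXT))   # too long for GPT-3's window
print(fits_in_context(prompt, CODEX_CONTEXT))  # still fits Codex's window
```

A prompt that overflows the window has to be truncated or summarized before the model sees it, which is why a larger context limit directly expands what a single request can cover.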

OpenAI aims to use smaller AI models more efficiently

In August, Cerebras unveiled its new CS-2 supercomputer, designed to enable training of AI models with up to 120 trillion parameters. In an accompanying interview with WIRED, Cerebras CEO Andrew Feldman said he had learned in a conversation with OpenAI that GPT-4 would have 100 trillion parameters and would not appear for several years.

At the LessWrong event, however, Altman stressed that a possible 100-trillion-parameter AI model is a long way off and that OpenAI is not targeting that mark for GPT-4.

OpenAI is currently improving the efficiency of smaller AI systems, Altman said. Perhaps, then, there will be no need for AI models of even more gigantic proportions: in the future, many people will be surprised at how much more powerful AI can become without parameter growth, according to Altman.

DALL-E and the bet on multimodal AI models

Asked about multimodal models and DALL-E, Altman acknowledged that DALL-E cannot yet outperform pure text models in natural language processing. However, he expects multimodal models to surpass pure text models in language generation in the coming years. If that does not happen, it would call into question OpenAI's bet on the performance of multimodal AI models.

In the future, Altman hopes to see numerous multimodal models trained for specific domains such as education, law, biology, or therapy. Because of the high computational requirements, he expects these models to come from only a few companies.

DALL-E itself is also slated for release, but Altman did not reveal details about the planned date or pricing model.

Computing power is not a bottleneck for general AI

Altman expects further advances in AI research in the 2030s. The OpenAI chief sees a 50 percent chance that artificial intelligence will take over numerous tasks from humans by 2035.

However, he said, this does not mean that artificial general intelligence will have been achieved by then. Its emergence is unlikely to be a binary moment, Altman speculates; instead, he expects gradual development.

Altman sees self-improving AI systems as the critical moment - the point at which to pay very close attention. If this capability emerges unexpectedly quickly, he would change his mind: then, he says, explosive AI development is possible.

Altman does not see computing power as a bottleneck for the development of general AI: with sufficient investment, the necessary hardware is probably already available. Instead, beyond larger AI models, scientific breakthroughs in algorithms are needed. Another relevant question is whether consciousness and intelligence can be separated, a question Altman describes as a central ethical aspect of AI research.

Max is managing editor at THE DECODER. As a trained philosopher, he deals with consciousness, AI, and the question of whether machines can really think or just pretend to.
Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.