Sam Altman, OpenAI's CEO, shared his ideas about the future of AI interaction at the company's DevDay event.
In a talk with Chief Product Officer Kevin Weil, Altman described an AI system that could change how we use computers and interact with the world.
Altman sees users being able to walk up to a "piece of glass" and "say whatever you want." Advanced reasoning models and agents would then create custom interfaces for each request in real-time. Users could interact by talking or by navigating a personalized video feed.
"It will completely change how we use computers and make things happen in the world," Altman said. "It's going to be pretty wild."
OpenAI's recent partnership with Jony Ive, Apple's former design chief, makes Altman's comments even more interesting. The two are working on a new kind of AI device for everyday users. It won't look like a typical smartphone, and it will be controlled mostly by voice. This fits right in with the future of AI interaction Altman is describing.
AI agents for complex tasks
Altman sees OpenAI's next step as moving from chatbots to AI agents, which he expects will let people complete tasks that currently take months in just an hour. "This is going to be a very significant change in the way the world works in a short period of time," Altman said. It also fits with the idea of an AI device that can handle complex requests, such as booking travel or summarizing news, on its own.
"By 2030 or so, we'll look back and be like, 'Yeah, this is just what a human is supposed to be capable of.' What a human used to grind at for years, I can now just ask a computer to do it, and it's done in an hour. We'll wonder why it's not done in a minute."
While OpenAI is getting closer to achieving this capability in AI models, the main obstacles for such systems are trust, security, and alignment, Altman said.
OpenAI plans to develop models for these use cases in the coming months. The company's new o1 model, with its improved reasoning capabilities, is a first step in this direction.