At GTC 2024, OpenAI COO Brad Lightcap spoke about generative AI - and what's to come.
In an interview with Manuvir Das, Vice President of Enterprise Computing at Nvidia, Lightcap explained how his company is helping enterprise customers implement generative AI.
Toward the end, however, the discussion turned to what's missing for useful AI agents and OpenAI's plans for the coming years.
"Two things have to happen: One, the reasoning ability of the model needs to improve, and two, you need to give it some ability to have actuators that basically take action in the world. And I think those are kind of the next two waves that we're going to see merge," Lightcap says of AI agents.
According to him, OpenAI expects to "really accelerate" the models' reasoning capabilities, which would in turn enable them to solve multi-step problems.
"Plenty of scope for future scaling"
When asked where he sees his company this year and three years from now, Lightcap was cautious. What he could say, however, was: "We don't think we're anywhere near the ceiling for improving the core capabilities of these models. We think there's a lot of room for future scale-ups, and we're very excited about that."
The team is trying to understand how "we can move the models along axes that are not just raw IQ," he said, without going into more detail. "I think we feel really good about where that work is going."
Scaling - whether of data, model size, or compute - thus still appears to be a key component of OpenAI's strategy. And the focus on reasoning - the ability to draw logical conclusions - is in line with the rumors surrounding Q*.