
OpenAI CTO says AI could reach PhD level in certain fields in 18 months

Matthias Bastian
Image: Mira Murati speaking on stage | Dartmouth Engineering, YouTube screenshot

OpenAI CTO Mira Murati says AI systems could reach PhD-level intelligence in certain fields within as little as a year and a half.

Murati says that older AI systems like GPT-3 have the intelligence of a toddler, while GPT-4 is on par with a smart high school student.

"And then in the next couple of years, we're looking at PhD-level intelligence for specific tasks," Murati explained in an interview with Dartmouth Engineering. When asked specifically when AI might reach that level, she said, "A year and a half, let's say."

Murati points to agent-based AI systems that are connected to the internet and can communicate, network, and collaborate with one another as a key driver of future AI advances.

Microsoft CTO Kevin Scott recently said that the next generation of AI models will likely pass PhD qualifying exams and make breakthroughs in logical reasoning. However, Scott stressed that real-world application is crucial.

Murati sees major potential for AI in education. "We have an opportunity to basically build super high-quality education and very accessible and ideally free for anyone in the world in any of the languages or cultural nuances that you can imagine," she said.

Murati stresses that developing advanced AI must happen alongside safety considerations. She advocates for stronger regulation of cutting-edge models with special capabilities, as these could be misused.

"It can be that you sort of develop the technology and then you have to figure out how to deal with these issues," Murati says. "You kind of have to build them alongside the technology and actually in a deeply embedded way to get it right."

OpenAI has recently come under fire from former employees over its safety practices, and it disbanded its superalignment team, which focused on the risks of future superintelligent AI. Murati, however, appears to be talking about current concerns such as misinformation, rather than the possibility of an unaligned superintelligence wiping out humanity.

OpenAI also recently formed a new safety committee and appointed a former NSA director to it, a move Edward Snowden sharply criticized.