OpenAI's "Q*" project was quickly labeled a secret AGI project. Now, returning OpenAI CEO Sam Altman weighs in.
Altman indirectly confirms Q* without giving any details about the project. When asked by The Verge's Alex Heath what Q* was about, Altman replied that it was an "unfortunate leak" that he did not want to comment on.
Altman won't talk openly about the project, but if Q* were a made-up story, he could simply have said so and gotten it out of the way.
But he is happy to reiterate previous OpenAI statements: the company expects rapid progress in AI technology to continue, and OpenAI is working to make it safe and useful.
"Without commenting on any specific thing or project or whatever, we believe that progress is research. You can always hit a wall, but we expect that progress will continue to be significant. And we want to engage with the world about that and figure out how to make this as good as we possibly can," says Altman.
Q* Theories
According to a Reuters report, OpenAI's alleged AI breakthrough, known as Q*, is said to have raised internal fears about a potential threat to humanity. It is also rumored to have played a role in Altman's 4.5-day ouster as CEO, allegedly over safety concerns.
The exact nature of Q* is unclear. According to Reuters, Q* can solve some simple mathematical problems. Experts speculate that it might be a combination of Large Language Models (LLM) and planning.
Q* could combine tree-based search methods such as Tree-of-Thoughts or Monte Carlo Tree Search (MCTS) with Process-Supervised Reward Models (PRMs) and a reinforcement learning algorithm such as Q-learning.
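For background on the last of those names: the tabular update rule below is what gives Q-learning its "Q". This is a minimal, purely illustrative sketch of the standard algorithm, with hypothetical toy states and actions; it says nothing about what OpenAI's system actually does.

```python
# Illustrative only: the classic tabular Q-learning update,
# Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
# Not a description of OpenAI's Q* - just the textbook rule the name comes from.
from collections import defaultdict

def q_learning_step(Q, state, action, reward, next_state, actions,
                    alpha=0.1, gamma=0.99):
    """Apply one temporal-difference update to the Q-table."""
    best_next = max(Q[(next_state, a)] for a in actions)   # value of best follow-up action
    td_target = reward + gamma * best_next                 # bootstrapped target
    Q[(state, action)] += alpha * (td_target - Q[(state, action)])
    return Q

# Toy usage: hypothetical states "s0"/"s1" and two actions.
Q = defaultdict(float)
actions = ["left", "right"]
Q = q_learning_step(Q, state="s0", action="right", reward=1.0,
                    next_state="s1", actions=actions)
print(Q[("s0", "right")])  # 0.1 after one update with alpha=0.1
```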
Test-time compute, meaning the amount of computation an AI system spends at inference time to work out an answer, is also likely to play an important role in Q*'s performance.
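A simple way to picture spending more compute at test time is best-of-N sampling: generate several candidate answers and keep the one a verifier scores highest. The sketch below is only an illustration of that general idea, with hypothetical `generate` and `score` stand-ins for a language model and a reward model; it is not OpenAI's actual method.

```python
# Minimal sketch of trading test-time compute for answer quality via best-of-N.
# generate() and score() are hypothetical stand-ins for an LLM and a reward model.
import random
from typing import Callable

def best_of_n(prompt: str,
              generate: Callable[[str], str],
              score: Callable[[str, str], float],
              n: int = 8) -> str:
    """Sample n candidates and return the highest-scoring one.
    Larger n = more test-time compute and, with a good scorer,
    usually a better final answer."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda answer: score(prompt, answer))

# Toy demo with dummy stand-ins so the sketch runs end to end.
if __name__ == "__main__":
    dummy_generate = lambda p: f"answer-{random.randint(0, 100)}"
    dummy_score = lambda p, a: len(a)  # pretend longer answers are "better"
    print(best_of_n("2 + 2 = ?", dummy_generate, dummy_score, n=4))
```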
Even without bringing AGI to the world, Q* could be a building block for the next generation of AI systems, ones more reliable and capable than today's systems like ChatGPT.