Prominent AI researcher Stuart Russell is sounding the alarm about runaway expectations in artificial intelligence, some of which he and others in the field helped create.
Russell warns that the current hype could easily turn into a bubble, with investors and companies pulling out en masse if momentum slows. "They just all run for the exits as fast as they can," Russell tells the Financial Times. "And so things can collapse really, really, really fast." He points to the AI winter of the 1980s, when, as he recalls, "[the systems] were not making any money, we were not finding enough high-value applications."
Russell's comments carry extra weight given his own history. In 2023, he signed the now-famous pause letter, which called for a temporary halt to AI development over safety concerns—back then, the worry was that things were moving too fast. Now he sees the opposite risk: an industry overheating on sky-high expectations that could collapse suddenly.
Ironically, the pause letter itself may have helped fan the flames by suggesting that AI systems were on the verge of an uncontrollable breakthrough. That narrative was amplified by things like Sam Altman's grandiose blog posts and similar statements from other tech and AI leaders, reinforcing the belief among investors that AI (and especially AGI) was about to surpass human capability and disrupt the entire economy overnight.
GPT-5 as a reality check
GPT-5 has quickly become a symbol of the shifting mood in the AI industry. Speculation about a slowdown in generative AI has only intensified with its release, which some found underwhelming. The disappointment isn't really about the model's technical performance—GPT-5 brings predictable improvements and is much more cost-effective—but about the gap between months of breathless promises and a reality that feels much more ordinary.
"For GPT-5 […] people expected to discover something totally new. And here we didn't really have that," says Thomas Wolf, co-founder of Hugging Face. Even Altman recently acknowledged that the industry might be in a bubble.
Meta's chief AI scientist Yann LeCun also points to the limits of today's large language models, noting that gains from "pure LLMs trained with text" are starting to slow—a point he's been making for years. He still sees potential in multimodal deep learning models that can learn from videos and other types of data.
Russell's warning comes at a critical moment, as the industry now needs real commercial traction and sustainable, paid use cases to justify the billions already invested and the trillions more that could follow, according to Altman. Without that, a sudden shift in sentiment could send the hype crashing down, no matter how useful the technology turns out to be in everyday life.
Much of the current excitement centers on so-called agentic AI systems, which are supposed to handle complex tasks autonomously over extended periods. But it's still unclear whether these new architectures can deliver on the steep price tags companies like OpenAI are reportedly floating—sometimes as high as $20,000 a month. Agent-based AI in particular still faces major challenges around reliability and cybersecurity.