Deepmind co-founder Shane Legg sees 50 percent chance of "minimal AGI" by 2028
Deepmind co-founder Shane Legg puts the odds of achieving "minimal AGI" at 50 percent by 2028. In an interview with Hannah Fry, Legg lays out his framework for thinking about artificial general intelligence. He describes a scale running from minimal AGI through full AGI to artificial superintelligence (ASI). Minimal AGI means an artificial agent that can handle the cognitive tasks most humans typically perform. Full AGI covers the entire range of human cognition, including exceptional achievements like developing new scientific theories or composing symphonies.
Legg believes minimal AGI could arrive in roughly two years, with full AGI following three to six years later. To measure progress, he proposes a comprehensive test suite: if an AI system passes tests covering the typical range of human cognitive tasks, and human teams can't find any weak points even after months of probing with full access to every detail of the system, the goal has been reached.
Deepmind CEO Demis Hassabis predicts three major AI trends for 2026
Demis Hassabis, CEO of Google Deepmind, expects the next year to bring major progress in multimodal models, interactive video worlds, and more reliable AI agents. Speaking at the Axios AI+ Summit, Hassabis noted that Gemini's multimodal capabilities are already powering new applications. He used a scene from "Fight Club" to illustrate the point: instead of just describing the action, the AI interpreted a character removing a ring as a philosophical symbol of renouncing everyday life. Google's latest image model uses similar capabilities to precisely understand visual content, allowing it to generate complex outputs like infographics, something that wasn't previously possible.
Hassabis says AI agents will be "close" to handling complex tasks autonomously within a year, aligning with the timeline he predicted in May 2024. The goal is a universal assistant that works across devices to manage daily life. Deepmind is also developing "world models" like Genie 3, which generate interactive, explorable video spaces.
Source: Axios via YouTube
Isomorphic Labs prepares for its first human trials with drugs designed by AlphaFold
Isomorphic Labs, a Deepmind spin-off focused on drug discovery, is getting ready for its first clinical trials with drugs designed using AlphaFold-based AI models.
"We're staffing up now. We're getting very close," said Colin Murdoch, President of Isomorphic Labs and Chief Business Officer at Deepmind, in an interview with Fortune.
The company wants to overhaul the traditionally slow and expensive process of drug development, with anti-cancer drugs already in the pipeline. Isomorphic Labs has signed agreements with Eli Lilly and Novartis, and in 2025 closed a USD 600 million investment round led by Thrive Capital.
Looking ahead, Isomorphic Labs has even bigger ambitions for AI in medicine. "One day we hope to be able to say— well, here's a disease, and then click a button and out pops the design for a drug to address that disease," Murdoch said.
Deepmind expert says trimming documents improves accuracy despite large context windows
How useful are million-token context windows, really? In a recent interview, Nikolay Savinov from Deepmind explained that when a model is fed many tokens, it has to distribute its attention across all of them. This means focusing more on one part of the context automatically leads to less attention for the rest. To get the best results, Savinov recommends including only the content that is truly relevant to the task.
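The intuition is the fixed budget of softmax attention: the weights over all tokens in the context sum to 1, so every extra token claims a share of that budget. The numpy sketch below is purely illustrative, assuming a single query and one attention step rather than how Gemini actually computes attention, but it shows how the weight on a relevant token shrinks as filler tokens are added.

```python
import numpy as np

def softmax(scores):
    """Normalized attention weights: they always sum to 1."""
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

# Illustrative only: one query scores a single "relevant" token slightly
# higher than a growing number of filler tokens. Real transformer attention
# has many heads and layers, but each softmax has the same fixed budget.
for n_filler in (10, 1_000, 100_000):
    scores = np.concatenate(([2.0], np.zeros(n_filler)))  # 1 relevant + n filler
    weights = softmax(scores)
    print(f"{n_filler:>7} filler tokens -> weight on relevant token: {weights[0]:.5f}")

# Output shows the relevant token's weight shrinking as the context fills up,
# because every additional token takes a slice of the same attention budget.
```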
I'm just talking about the current reality: if you want to make good use of it right now, then, well, let's be realistic.
Nikolay Savinov
Recent research supports this approach. In practice, this could mean cutting out unnecessary pages from a PDF before sending it to an AI model, even if the system can technically process the entire document at once.
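As a concrete sketch of that workflow, the snippet below keeps only the relevant pages of a PDF before it is handed to a model. It uses the pypdf library; the file names, page numbers, and the commented-out send_to_model() call are illustrative placeholders, not a specific product API.

```python
# Hypothetical pre-processing step: trim a long PDF to the pages that matter
# before sending it to an AI model, even if the model could ingest it whole.
from pypdf import PdfReader, PdfWriter

RELEVANT_PAGES = [0, 3, 4]  # assumed: pages already identified as relevant (0-indexed)

reader = PdfReader("full_report.pdf")
writer = PdfWriter()
for i in RELEVANT_PAGES:
    writer.add_page(reader.pages[i])

with open("trimmed_report.pdf", "wb") as f:
    writer.write(f)

# send_to_model("trimmed_report.pdf", prompt="Summarize the findings")  # placeholder
```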
Source: Google via YouTube
Polite prompts can improve AI responses, says Deepmind researcher
Does saying "please" and "thank you" really help when talking to AI? According to Murray Shanahan, a senior researcher at Google Deepmind, being polite with language models can actually lead to better results. Shanahan says that clear, friendly phrasing—and using words like "please" and "thank you"—can improve the quality of a model's responses, though the effect depends on the specific model and the context.
There's a good scientific reason why that [being polite] might get better performance out of it, though it depends – models are changing all the time. Because if it's role-playing, say, a very smart intern, then it might be a bit more stroppy if not treated politely. It's mimicking what humans would do in that scenario.
Murray Shanahan
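A minimal sketch of what this looks like in practice: the same request phrased tersely and politely, ready to be compared on whichever model is in use. The ask_model() call is a placeholder, and whether the polite version actually wins depends on the model and context, as Shanahan notes.

```python
# Sketch of Shanahan's point: identical task, terse vs. polite framing.
task = "Summarize the attached meeting notes in five bullet points."

terse_prompt = task
polite_prompt = (
    "Hi! Could you please summarize the attached meeting notes "
    "in five bullet points? Thank you."
)

# responses = [ask_model(p) for p in (terse_prompt, polite_prompt)]  # placeholder call
# Compare the two responses for completeness and tone.
```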
Source: Google Deepmind via YouTube