Ad
Short

Demis Hassabis, CEO of Google Deepmind, expects the next year to bring major progress in multimodal models, interactive video worlds, and more reliable AI agents. Speaking at the Axios AI+ Summit, Hassabis noted that Gemini's multimodal capabilities are already powering new applications. He used a scene from "Fight Club" to illustrate the point: instead of just describing the action, the AI interpreted a character removing a ring as a philosophical symbol of renouncing everyday life. Google's latest image model uses similar capabilities to precisely understand visual content, allowing it to generate complex outputs like infographics, something that wasn't previously possible.

Hassabis says AI agents will be "close" to handling complex tasks autonomously within a year, aligning with the timeline he predicted in May 2024. The goal is a universal assistant that works across devices to manage daily life. Deepmind is also developing "world models" like Genie 3, which generate interactive, explorable video spaces.

Ad
Ad
Short

Yann LeCun, Meta's outgoing AI scientist, is launching a new startup built around "world models" - systems designed to understand physical reality rather than just generate text. LeCun argues that Silicon Valley is currently "hypnotized" by generative AI, and he intends to build his project with a heavy reliance on European talent. According to Sifted, the company will operate globally and maintain a hub in Paris.

LeCun contends that today's language models lack any real grasp of how the world works, and that simply scaling them up won't lead to human-level intelligence. His project, called AMI (Advanced Machine Intelligence), relies on a new architecture that moves away from generative methods entirely.

"Some people claim we can scale up current technology and get to general intelligence [...] I think that's bullshit, if you'll pardon my French."

Meta is signing on as a partner because Mark Zuckerberg supports the initiative, though LeCun notes that potential applications extend well beyond Meta's specific interests. He plans to leave the tech giant at the end of the year to lead the independent organization.

Short

Aikido Security warns that plugging AI agents into GitHub and GitLab workflows opens up a serious vulnerability in enterprise environments. The issue hits widely used tools like Gemini CLI, Claude Code, OpenAI Codex, and GitHub AI Inference.

According to the security firm, attackers can slip hidden instructions into issues, pull requests, or commits. That text then flows straight into model prompts, where the AI interprets it as a command instead of harmless content. Because these agents often have permission to run shell commands or modify repos, a single prompt injection can leak secrets or alter workflows. Aikido says tests showed this risk affected at least five Fortune 500 companies.

Aikido

Google patched the issue in its Gemini CLI repo within four days, according to the report. To help organizations secure their pipelines, Aikido published open search rules and recommends limiting the tools available to AI agents, validating all inputs, and avoiding the direct execution of AI outputs.

Ad
Ad
Ad
Ad
Short

Google AI just released an updated "Deep Think" mode for Google AI Ultra subscribers using the Gemini app. Built on the Gemini 3 model, the feature aims to boost the AI's reasoning skills. Google says the mode uses "advanced parallel thinking" to investigate multiple hypotheses at the same time, making these models better suited for complex scientific tasks than for mundane office work.

The technology "builds on" on the Deep Think variant of Gemini 2.5, which recently posted impressive scores at the International Mathematical Olympiad and a major programming competition. To try it out, subscribers select "Deep Think" in the app's input field and choose the "Gemini 3 Pro" model from the menu. The Ultra subscription currently costs $250 per month for the standard plan.

The release looks like a direct response to DeepsSeek's new open-source math model and an upcoming system from OpenAI. Reports suggest OpenAI plans to launch its new model next week, with performance expected to outperform Gemini 3.

Google News