Google's new AI Mode, agent features, and multimodal tools hint at the next big shift for search.
At the center of Google's overhaul is "AI Mode," now available to all users in the US. Originally previewed last year, the feature is built to handle longer, more complex queries and supports more personalized, multimodal, and agent-like interactions. Google calls it one of the biggest changes in the company's history.
The goal is to make search feel more like a conversation. After the initial query, users can ask follow-up questions, add images or graphics, and work with richer visualizations and data. The system runs on Gemini 2.5, Google's most advanced AI model to date.
From answers to actions
"AI Mode" also comes with new agent-like features, drawing from Google's Project Mariner research. It can now handle tasks like booking tickets, making restaurant reservations, or monitoring prices. The AI scans offers in real time, fills out forms, and suggests the best options, but users still make the final call on what to buy or book.
AI Mode can also connect to other Google services like Gmail and Google Drive. This opens the door to more personalized suggestions, based on your past searches, reservations, or documents. For example, if you're planning a trip, the AI can recommend activities or restaurants that fit your style.
Google hasn't said much about how AI Mode interacts with the rest of the web it pulls its answers from (spoiler: probably not very much). While AI Mode cites external sources, early studies suggest users rarely click through to them, a trend that could upend business models for publishers and other website owners.
Try-on tools and AI checkout for shopping
Google is also rolling out new AI-powered shopping tools. Soon, users will be able to upload photos and virtually try on clothes, using a custom image generation model trained specifically for fashion. It's the first time this kind of large-scale virtual try-on has been built into search.
Google's new AI shopping tool lets users upload a photo and virtually try on clothes, powered by a fashion-specific image generation model. | Image: Google
Once you've found something you like, a new agent-driven checkout system can buy it for you at your chosen price. The AI tracks price changes, alerts you to discounts, and can even handle the entire purchase including payment. All of this is built on Google's "Shopping Graph," which tracks more than 50 billion products and updates every hour, according to the company.
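Again, Google hasn't detailed the mechanics, but the described behavior amounts to a price watcher: poll a product's price, flag drops, and hand off to checkout once it reaches the user's target. Here is a minimal, self-contained sketch of that pattern; the product ID and price feed are invented, and a real agent would query a live catalog (such as the Shopping Graph) on an hourly schedule rather than a canned list.

```python
import itertools
import time

def fake_price_feed(product_id: str):
    # Stand-in for repeated catalog lookups; a real agent would query a
    # frequently refreshed product index instead of this canned sequence.
    return itertools.chain([129.99, 124.50, 118.00, 109.95], itertools.repeat(109.95))

def watch_price(product_id: str, target_price: float, poll_seconds: float = 1.0) -> None:
    """Alert on price drops and hand off to checkout once the user's target is met."""
    last_seen = None
    for price in fake_price_feed(product_id):
        if last_seen is not None and price < last_seen:
            print(f"{product_id}: price dropped to ${price:.2f}")
        if price <= target_price:
            print(f"{product_id}: target ${target_price:.2f} reached, starting checkout.")
            return
        last_seen = price
        time.sleep(poll_seconds)  # a real watcher would poll hourly, not every second

if __name__ == "__main__":
    watch_price("running-shoes-42", target_price=110.00)
```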
Real-time, multimodal search
Google says these changes are just the beginning of an even bigger shift. The end goal is a universal, multimodal AI assistant (or "world model") that doesn't just answer questions, but also takes on tasks, anticipates needs, and boosts productivity across every device, from desktops and phones to XR headsets. The "Project Astra" research effort is driving much of this work.
A new feature called "Search Live" takes multimodal search a step further. Users can point their camera at anything around them and ask questions in real time—whether they're trying to identify an object, translate a sign, or solve an everyday problem. The AI analyzes the camera feed, explains what it sees, and links to additional resources. Originally developed as part of the Astra project, this feature is now available to US users through a live button in Google Search.