
Google is adding new visual search tools to AI Mode, letting users search for images using natural language and save results directly.


The update combines visual search with conversational queries. Users can start with a text query, an uploaded photo, or an image they're already viewing, then ask follow-up questions to narrow the results. Each image result links back to its original source.

Video: Google

The new features build on Google's existing visual search, using the multimodal capabilities of Gemini 2.5. Google has introduced a "visual search fan-out" method that processes both images and text by launching multiple background searches at once to deliver more detailed results. Google says this approach is also being tested in other parts of its AI search, but hasn't shared technical details yet.

Diagram: Google's "visual search fan-out" technique analyzes both images and text, launching several background searches to deliver more thorough answers. | Image: Google

According to Google, the system is designed to recognize both main objects and smaller details in images, running several searches in parallel to better understand the visual context.
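Google hasn't published implementation details, but the description maps onto a familiar fan-out pattern: derive several sub-queries from one image, run them concurrently, and merge the results. The Python sketch below is purely illustrative under that assumption; detect_elements, search, and the sample data are hypothetical stand-ins, not Google APIs.

```python
import asyncio

# Illustrative stubs only: these stand in for Gemini's multimodal analysis
# and Google's search backend, neither of which is a public API.
async def detect_elements(image_bytes: bytes) -> list[str]:
    # Pretend the model spots the main object plus smaller details.
    return ["barrel jeans", "ankle-length hem", "light-wash denim"]

async def search(query: str) -> list[dict]:
    # Pretend backend search; each result links back to its original source.
    return [{"title": f"Result for {query!r}",
             "source": f"https://example.com/{hash(query) & 0xffff}"}]

async def visual_fan_out(image_bytes: bytes, user_prompt: str) -> list[dict]:
    """Fan-out sketch: derive sub-queries from the image, run them in
    parallel, then merge and de-duplicate the results."""
    elements = await detect_elements(image_bytes)
    sub_queries = [f"{user_prompt} {element}" for element in elements]
    # Launch all background searches at once.
    result_lists = await asyncio.gather(*(search(q) for q in sub_queries))
    seen, merged = set(), []
    for results in result_lists:
        for result in results:
            if result["source"] not in seen:
                seen.add(result["source"])
                merged.append(result)
    return merged

if __name__ == "__main__":
    for r in asyncio.run(visual_fan_out(b"", "jeans that aren't too wide")):
        print(r["title"], "->", r["source"])
```

The point of the sketch is simply that one visual query becomes several concurrent searches whose merged results preserve source links, which matches Google's high-level description.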

Shopping without filters

Shopping is a major focus. Instead of using filters, users can describe what they're looking for in plain English. For example, searching for "barrel jeans that aren't too wide" shows shoppable results, which can be further refined with follow-up requests like "show me more ankle-length options." On mobile, users can even search within a specific image.
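Conceptually, the follow-up flow behaves like stateful query refinement: each request adds a constraint to the running search rather than starting a new one. A minimal, hypothetical sketch of that idea (not Google's actual interface):

```python
from dataclasses import dataclass, field

@dataclass
class ShoppingQuery:
    """Hypothetical conversational query state: base request plus refinements."""
    base: str
    refinements: list[str] = field(default_factory=list)

    def refine(self, follow_up: str) -> "ShoppingQuery":
        # Each follow-up narrows the existing search instead of replacing it.
        return ShoppingQuery(self.base, self.refinements + [follow_up])

    def to_search_text(self) -> str:
        return " ".join([self.base, *self.refinements])

query = ShoppingQuery("barrel jeans that aren't too wide")
query = query.refine("ankle-length")
print(query.to_search_text())  # barrel jeans that aren't too wide ankle-length
```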

Visual search in AI Mode helps spark ideas from broad prompts, like this example of maximalist bedroom inspiration. | Image: Google

Google's shopping feature is powered by the Shopping Graph, which tracks more than 50 billion product listings and updates over 2 billion entries every hour.

The new visual search in AI Mode launches this week in the US, in English.

Google has also added more AI features to AI Mode, including Gemini 2.5 Pro and Deep Search for paying users, and an automated calling tool for local businesses.


At I/O 2025 earlier this year, Google previewed agent-based features and personalized results: Project Mariner aims to let AI handle tasks like booking tickets, and the company also showed virtual try-on tools for clothing.

Google faces competition from OpenAI, which recently launched an online shopping payment feature for ChatGPT. The feature lets users make instant purchases within chat, starting with Etsy and expanding to over a million Shopify stores. OpenAI and Stripe have also introduced the Agentic Commerce Protocol, an open-source solution for in-chat shopping.

Summary
  • Google is extending its AI Mode with a visual search feature that lets users search for images using natural language, refine results, and shop directly. Both text and image inputs are accepted, and each result includes a link to the original source.
  • The feature leverages Gemini 2.5’s multimodal abilities and a "visual search fan-out" method, which identifies main and secondary elements in images and runs multiple background searches at once to deliver detailed answers.
  • Shopping integration is a core focus: users can specify what they want without relying on traditional filters, get relevant product suggestions, and further narrow their search. The feature is powered by Google’s Shopping Graph and is first available in English in the USA.