Google's WebMCP moves the web closer to becoming a structured database for AI agents
Key Points
- Google has introduced WebMCP, a new interface that brings the Model Context Protocol concept to the web, allowing AI agents to interact with websites in a structured way.
- Security remains a major unresolved concern: a Google developer says protecting against prompt injection attacks is the responsibility of individual AI agents, not the API itself, while even leading models still fail against targeted attacks at alarming rates.
- The push toward an "agentic web" poses a real threat to website operators, who risk losing ad revenue, direct customer relationships, and user engagement as their sites increasingly become background infrastructure for AI systems.
Google is pushing the web closer to becoming a database for AI agents. WebMCP is a new interface designed to let websites communicate with AI agents in a standardized way.
WebMCP is essentially Google's take on the Model Context Protocol (MCP), built specifically for the web. It brings the MCP concept to websites so AI agents can interact with them in a structured way through the browser, instead of clumsily parsing through page code like they do now.
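For context, MCP exposes a service's capabilities as named tools that an agent calls with structured arguments instead of scraping rendered pages. The sketch below shows the general shape of a tool definition and a tool call in the protocol; the "searchFlights" tool and its fields are made up for illustration and don't belong to any real site.

```typescript
// Conceptual shape of an MCP-style tool definition and tool call.
// The "searchFlights" tool and its fields are illustrative only.

// A tool is described by a name, a human-readable description,
// and a JSON Schema for its input.
const searchFlightsTool = {
  name: "searchFlights",
  description: "Search for flights between two airports on a given date",
  inputSchema: {
    type: "object",
    properties: {
      origin: { type: "string", description: "IATA code, e.g. SFO" },
      destination: { type: "string", description: "IATA code, e.g. JFK" },
      date: { type: "string", format: "date" },
    },
    required: ["origin", "destination", "date"],
  },
};

// The agent invokes the tool with structured arguments (MCP carries this as a
// JSON-RPC "tools/call" message) rather than parsing the page's HTML.
const toolCall = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "searchFlights",
    arguments: { origin: "SFO", destination: "JFK", date: "2025-12-24" },
  },
};

console.log(JSON.stringify(toolCall, null, 2));
```

WebMCP applies the same idea inside the browser: the page itself declares the tools, and the agent calls them through the browser instead of over a separate server connection.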
According to Google developer Andre Cipriani Bandarra, WebMCP defines structured tools that AI agents can use to perform specific actions on websites, like booking flights, creating support tickets, or searching for products.
- Customer support: Help users create detailed customer support tickets, by enabling agents to fill in all of the necessary technical details automatically.
- Ecommerce: Users can better shop your products when agents can easily find what they're looking for, configure particular shopping options, and navigate checkout flows with precision.
- Travel: Users could more easily get the exact flights they want, by allowing the agent to search, filter results, and handle bookings using structured data to ensure accurate results every time.
Google
The system pairs a declarative API based on HTML forms for simple actions with an imperative JavaScript API for more complex flows. The goal is to make agents faster and more reliable than having them navigate raw page code. In short, websites become "agent-ready," as Bandarra puts it: "Imagine an agent that can handle complex tasks for your users with confidence and speed."
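To make the imperative side concrete, here is a rough sketch of how a page might register a tool for agents in script. The `navigator.modelContext` entry point, the `registerTool` call, and the option names are assumptions for illustration based on how the proposal is described, not the confirmed WebMCP surface, and the "/api/tickets" endpoint is hypothetical.

```typescript
// Hypothetical sketch only: the modelContext entry point, registerTool call,
// and option names are assumptions, not the confirmed WebMCP API.

const modelContext = (navigator as any).modelContext;

// Imperative registration: the page exposes a "createSupportTicket" tool that
// an agent can call with structured arguments instead of filling the form itself.
modelContext?.registerTool?.({
  name: "createSupportTicket",
  description: "Create a support ticket with product, category, and problem description",
  inputSchema: {
    type: "object",
    properties: {
      product: { type: "string" },
      category: { type: "string", enum: ["billing", "bug", "account"] },
      description: { type: "string" },
    },
    required: ["product", "description"],
  },
  // The page stays in control of what the tool actually does.
  async execute(args: { product: string; category?: string; description: string }) {
    const response = await fetch("/api/tickets", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(args),
    });
    return { ticketId: (await response.json()).id };
  },
});
```

The declarative path would cover simpler cases by letting the browser derive a tool description from existing, annotated HTML forms rather than from custom script.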
On the security front, Bandarra doesn't see the API itself as responsible for protection: defending against prompt injection attacks, where bad actors manipulate AI agents through injected instructions, falls on the individual agent. Good luck with that.
WebMCP is currently available to developers through Google's early preview program. Down the road, Google could integrate WebMCP into Chrome and its AI services like Gemini, letting browser agents interact directly with websites for things like automated purchases or travel bookings.
The "agentic web" is still more vision than reality
The "agentic web" describes a future where AI agents browse, transact, and interact with online services autonomously on behalf of users. But the reality is far messier than the pitch.
Even OpenAI has acknowledged that prompt injection may never be fully solved, an admission that came after red teaming its own agentic tools uncovered new prompt attack vectors. Anthropic's Claude Opus 4.5 still falls for targeted prompt attacks more than three times out of ten, an unacceptable failure rate for any system expected to handle transactions or make decisions on its own.
While agentic capabilities are quietly becoming standard in frontier models like GPT-5.2, Claude Opus 4.5, and Gemini 3 Pro, the security trade-off remains stark: the more autonomously an agent operates, the larger its attack surface becomes. For now, the industry's practical answer is heavily constrained agents with close human oversight, not fully autonomous systems navigating the open web.
Website operators face a new dilemma
Google isn't the only company trying to rebuild the web for AI agents. In mid-2025, Microsoft introduced NLWeb, an open-source project that gives websites a natural language interface. Each NLWeb instance can act as an MCP server, making website content accessible to other agents in the MCP ecosystem. Microsoft sees NLWeb as a potential standard for the "agentic web," something like HTML was for the classic web.
For website operators, this shift creates a real problem. If AI agents handle tasks like product searches, price comparisons, content consumption, or bookings on their own, fewer people need to visit the actual website. That means operators could lose ad revenue, direct customer contact, and the chance to win users over with their own offerings. The web, in other words, is increasingly becoming background infrastructure.