
Your website was built for humans: AI agents need something different.

Bert Swinnen
Agentic web, Composable, AI, Digital front office

Introduction

AI agents are starting to browse, transact, and retrieve information on behalf of users, and a growing share of web traffic is not human. The decisions being made right now about content models, architecture, and API design will determine how well organisations perform in a web that increasingly runs on agents rather than browsers.

On 10 February 2026, Google announced WebMCP, a new browser API designed to address exactly that. We have signed up for the early preview. Before digging in, here is what has actually moved the needle so far, and why this announcement has caught our attention.

First, a bit of context

If you have been following the conversation around AI and the web, you will have noticed a pattern. Every few months, a new standard or approach emerges promising to make content more readable for machines. Some of it sticks. Other efforts, like llms.txt, definitely have not.

So when something new comes along, a healthy amount of scepticism is fair.

That said, some recent developments have genuinely changed things. Cloudflare’s Markdown for Agents is one of them. The premise is straightforward: AI agents browsing the web are forced to parse bloated HTML. Navigation bars, wrapper divs, styling: none of it carries meaning, but all of it costs tokens. This blog post costs 16,180 tokens in HTML. In markdown, it is 3,150. That is an 80% reduction, and it compounds across every page an agent reads. (Source: Cloudflare.)

Cloudflare handles the conversion at the network level. When an AI crawler requests a page with Accept: text/markdown, Cloudflare converts the HTML to clean markdown on the fly and returns it directly. No changes are needed at the origin. We have enabled it on our own site, by the way. You can try it yourself:

curl https://www.two-point-o.com/cloudflare/ -H "Accept: text/markdown"

Worth noting: tools like Claude Code and OpenCode already send this Accept header by default. The behaviour is already out there. Markdown for Agents just makes sure your site responds to it properly.
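
The same request works from JavaScript for anyone who prefers fetch over curl. A minimal sketch, using our own page from the example above; any HTTP client that sets the Accept header gets the markdown representation:

// Run in an ES module or other async context.
const res = await fetch('https://www.two-point-o.com/cloudflare/', {
  headers: { Accept: 'text/markdown' }, // ask for the markdown representation
});
console.log(await res.text()); // clean markdown instead of full HTML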

Alongside that, we autogenerate FAQ schema beneath every insight on this site, implement structured data, and design content models as proper ontologies that machines can reason about rather than just parse. The direction has been consistent: make content legible and actionable for machines, not just humans.

WebMCP fits that same direction, and it could be different from what came before. Not because it is a radical new idea, but because it builds on something that is already working. Anthropic’s Model Context Protocol (MCP) has become a de facto standard for connecting AI models to tools and data sources. WebMCP extends that same logic into the browser layer. Instead of MCP servers exposing backend tools to models, WebMCP lets websites expose their own actions directly to agents browsing the web.

Google’s weight behind it matters. So does the community momentum already behind MCP. And for anyone building headless, composable architecture, the fit is almost immediate.

What WebMCP actually does

Right now, browser-based AI agents (the kind powering tools like ChatGPT’s operator mode) navigate websites much like a human would. They read the DOM, infer what is clickable, and attempt to interact. It works. Sort of. Until it does not. A layout change, a dynamic element, an unexpected state, and the whole flow breaks. WebMCP flips that. Instead of the agent guessing, your site tells it what it can do.

By defining these tools, you tell agents how and where to interact with your site. This direct communication channel eliminates ambiguity and allows for faster, more robust agent workflows.

André Cipriani Bandarra, Lead on Chrome | Google

Two APIs make this possible.

  • The declarative API handles standard, predictable interactions: things you can define directly in HTML. Think structured form flows, clear navigation paths, simple transactional steps.

  • The imperative API is where it gets genuinely interesting. It lets you register complex, dynamic actions through JavaScript, directly on navigator.modelContext. The agent does not need to infer what a button does. You tell it, explicitly, what tools are available, what inputs they expect, and what they return.

What this looks like in practice

Bossflow Silk product detail page showing product specifications, pricing, and purchase options

Take the Boss Paints product page above. A customer selects a size and hits “Voeg toe aan winkelmandje” (“add to basket”). Simple enough for a human. But the moment you want an agent to handle that flow (search, filter, select, confirm, checkout), things get complicated fast if your site was not designed with that in mind.

With the imperative API, you register an add_to_cart tool. The agent reads what is available, understands it needs a product ID and a quantity, and executes. No button clicks. No DOM parsing. No guesswork.

The reference implementation in the early preview documentation uses add_to_cart as its primary example. That is not a coincidence. Commerce flows were clearly front of mind when this was designed.

navigator.modelContext.registerTool({
  name: 'add_to_cart',
  ...
});
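
Filled in, a registration might look roughly like the sketch below. The name, description, inputSchema, and execute fields follow the MCP tool convention that WebMCP builds on; addToCart is a hypothetical storefront function standing in for your real cart logic, not part of any real API.

// Sketch only: tool shape follows the MCP convention; addToCart is a
// hypothetical storefront function standing in for your real cart logic.
navigator.modelContext.registerTool({
  name: 'add_to_cart',
  description: 'Add a product to the shopping cart.',
  inputSchema: {
    type: 'object',
    properties: {
      productId: { type: 'string', description: 'Product identifier' },
      quantity: { type: 'integer', description: 'Number of units' },
    },
    required: ['productId', 'quantity'],
  },
  async execute({ productId, quantity }) {
    // Reuse the same cart logic the "add to basket" button calls.
    const cart = await addToCart(productId, quantity);
    return {
      content: [{ type: 'text', text: `Added ${quantity} x ${productId}. Cart now holds ${cart.itemCount} items.` }],
    };
  },
});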

A complete commerce flow built with the imperative API might expose a sequence of tools across the full funnel: search, filter, get product detail, add to cart, apply a promo code, initiate checkout, confirm order. One agent, one structured conversation with your site, end to end.
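
As a rough sketch of what that could look like, assuming the provideContext call from the WebMCP explainer for registering a set of tools at once. makeTool is a local helper, and every tool body here is a placeholder:

// Hypothetical full-funnel tool set. makeTool is a local helper; real tools
// would each define a meaningful inputSchema and execute handler.
const makeTool = (name, description) => ({
  name,
  description,
  inputSchema: { type: 'object', properties: {} },
  async execute() {
    return { content: [{ type: 'text', text: `${name} called` }] };
  },
});

navigator.modelContext.provideContext({
  tools: [
    makeTool('search_products', 'Search the catalogue by keyword.'),
    makeTool('get_product_detail', 'Fetch full details for one product.'),
    makeTool('add_to_cart', 'Add a product to the cart.'),
    makeTool('apply_promo_code', 'Apply a promotional code.'),
    makeTool('initiate_checkout', 'Start the checkout flow.'),
    makeTool('confirm_order', 'Place the order.'),
  ],
});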

One detail worth noting for developers: the readOnlyHint annotation lets you mark certain tools, such as searching a catalogue or reading cart contents, as safe to call without user confirmation, while write operations like placing an order should require explicit confirmation (see the sketch below). That kind of intentional design maps well onto how a responsible composable storefront would already be structured.
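
A sketch of how that annotation could look, assuming it sits on the tool definition the way MCP’s tool annotations do. searchProducts is again a hypothetical stand-in for your catalogue search:

// Read-only tool: the hint tells the agent it is safe to call without
// user confirmation. searchProducts is a hypothetical catalogue search.
navigator.modelContext.registerTool({
  name: 'search_catalogue',
  description: 'Search products by keyword.',
  annotations: { readOnlyHint: true },
  inputSchema: {
    type: 'object',
    properties: { query: { type: 'string', description: 'Search terms' } },
    required: ['query'],
  },
  async execute({ query }) {
    const results = await searchProducts(query);
    return { content: [{ type: 'text', text: JSON.stringify(results) }] };
  },
});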

Two directions, one question

Here is something worth thinking carefully about, because it shapes how you approach this.

The future of web-based interfaces is genuinely unclear. But it seems hard to argue against one thing: they are heading towards something more conversational. And that shift takes two distinct directions.

  1. On-site: conversational interfaces that sit within your own digital experience, guiding users through complex journeys in ways that go well beyond that chatbox in the bottom right corner that nobody asked for. Richer, more contextual, more useful.

  2. Off-site: agents like Claude, Le Chat, or ChatGPT that interact with your platform entirely autonomously, on behalf of a user who may never visit your site in any traditional sense. They search, compare, transact, and report back. Your site becomes a service layer, not just a destination.

WebMCP is relevant to both. But the off-site direction is the one that demands a more fundamental rethink. If an agent is handling the journey, your content model, your API design, and your action architecture matter far more than your navigation or your hero image.

Why this sits high on our list

Content is central to everything we do at Two Point O. The way content is structured, modelled, and distributed determines how well a digital experience performs, for humans and, increasingly, for agents.

We have been building with this in mind for a while. Enabling markdown delivery for agent crawlers. Implementing structured data and schema markup. Designing content models as proper ontologies. Not everything has paid off equally, but the direction has been consistent.

WebMCP is the next logical step, and it fits naturally into how composable, headless architecture already works. Your content is already decoupled. Your APIs are already structured. Adding agent-friendly action declarations is not a rebuilding exercise. It is an extension of work that should already be underway.

We will share what we learn from the preview as it unfolds.

What this means for your digital front office

You do not need to act on WebMCP this week. It is an early preview. Adoption will take time, and the standard will evolve.

But if you are making architecture decisions now (about your content model, your API design, your frontend framework), it is worth asking whether those decisions leave room for this kind of instrumentation. Composable architecture does, almost by default. Tightly coupled, monolithic platforms do not.

The organisations that will adapt most easily are the ones already building with separation of concerns in mind. Content separate from presentation. Logic separate from layout. Data accessible via clean APIs.

If that describes your setup, you are closer to agent-ready than you might think. If it does not, that is worth knowing now rather than later.

Curious about what agent-ready architecture looks like for your digital front office? Let’s talk.

Book a call with us

FAQ

What is WebMCP and why does it matter for my business?

WebMCP lets your website communicate directly with AI agents, so they can understand and interact with your content in a structured, meaningful way. That matters for any organisation aiming to provide seamless, conversational experiences, both on-site and off-site, through autonomous agents. Adopting WebMCP helps future-proof your digital front office, keeping it relevant and competitive as interactions with AI agents become more prevalent.

What business benefits can early adoption of WebMCP deliver?

By enabling direct communication between your website and AI agents, WebMCP can make interactions significantly more efficient and effective, improving customer experience and operational efficiency. That can translate into measurable benefits such as higher conversion rates, lower bounce rates, and better customer satisfaction. Early adopters also have a chance to differentiate themselves and establish a competitive advantage.

What does implementing WebMCP involve?

Implementing WebMCP requires careful consideration of your existing architecture and content model. For businesses already on composable, headless architecture, the integration can be relatively seamless. You will need to assess your current setup and determine the adjustments needed to support WebMCP’s declarative and imperative APIs. Because it is an early preview, adoption will take time and the standard will evolve, so staying informed and adapting your strategy accordingly is essential.

How can WebMCP differentiate my organisation?

WebMCP represents a significant step forward in enabling websites to interact with AI agents in a structured and meaningful way. By embracing it, your organisation can pioneer new conversational experiences, both on-site and off-site, and establish itself as a leader in the digital landscape. That can lead to stronger brand differentiation, customer loyalty, and ultimately a competitive advantage.

How does WebMCP fit into a future-ready digital strategy?

As interactions with AI agents become increasingly prevalent, WebMCP offers a crucial building block for future-ready digital experiences. Integrating it into your strategy keeps your digital front office adaptable and responsive to emerging trends and technologies, so your organisation can stay ahead of the curve and capitalise on new opportunities as they arise.

How do I measure the impact of WebMCP?

While the full impact of WebMCP is yet to be realised, early adopters can expect benefits in operational efficiency, customer satisfaction, and conversion rates. Track key performance indicators such as engagement metrics, conversion rates, and customer feedback to assess the ROI of WebMCP and make data-driven decisions about optimising its implementation.
