GEO & AI Visibility

The Website That Acts

WebMCP turns your website from something AI reads into something AI uses.

For twenty-five years, every attempt to make the web work for machines has been a retrofit.

Meta tags. Schema markup. Structured data. Semantic HTML. Each one a patch — a way to help machines make sense of a format that was designed for humans reading documents. SEO was a retrofit. GEO is a retrofit. Even the most sophisticated AI optimization today is still the same thing: making a document format more readable for machines that would rather not read documents at all.

In February 2026, something different arrived. Not another layer on top of the old stack. A new capability in the stack itself — the first web standard designed for AI agents, not adapted for them.

It's called WebMCP. And it marks the beginning of the agentic web — a web where AI doesn't just read what you've published, but uses what you've built. A configurator an agent can invoke. A calculator it can run. A booking system it can operate. Not by scraping a page and guessing at form fields — through a structured interface the website explicitly offers and the agent explicitly understands.

The web was a library. It's becoming a workshop. Your website doesn't just need to be recommendable — it needs to be actionable.

Here's what that looks like in practice.

Your heat pump configurator has been on the website for three years.

It works. A visitor enters their building type, their square footage, their current heating system. The tool computes a recommendation. Sometimes they fill out the contact form. Sometimes they don't.

Now a buyer asks ChatGPT: "Which heat pump fits a 1970s detached house, 140 square meters, replacing oil heating?"

The AI finds your content page — the one that explains compatibility in general terms. It cites a paragraph. The buyer reads it, maybe clicks through, eventually finds the configurator, enters the same information they already gave the AI, and starts the whole process again.

The answer was computable. The tool existed. But the AI couldn't reach it.

It could read your website. It couldn't use it.

I

What AI Can See vs. What Your Website Can Do

The retrofit problem runs deeper than format.

GEO solved the content side — structure your pages so AI can extract claims, evaluate evidence, decide whether to cite you. That work matters and it continues to matter. But it only addresses half of what a website is.

The web isn't just content. It's full of capabilities — configurators, calculators, booking systems, comparison tools. Thousands of them, on thousands of websites, doing useful things for the humans who find them.

AI agents can't touch any of it.

They can read that your configurator exists. They can cite the page it sits on. They cannot fill in the fields, run the calculation, and return the result. The capability is right there. The interface between agent and capability is not.

We scanned 50 websites in the German heating industry. 33 had content pages explaining heat pump compatibility. 22 had interactive configurators that compute personalized recommendations. Zero exposed those configurators as structured tools an AI agent could discover and invoke.

Twenty-two companies built the capability. Not one made it visible to the machines that are increasingly deciding who gets recommended.

That's the gap. Not a content gap — a capability gap. The web's interactive layer is entirely invisible to AI.

II

The Mechanism

WebMCP draws on the same idea behind Anthropic's MCP — the Model Context Protocol, which gives AI structured access to backend systems. WebMCP applies that pattern to the open web: structured tool interfaces, but for any website, any agent.

The mechanism is simple enough to be surprising.

A website adds a handful of attributes to an existing HTML form — a tool name, a description of what the tool does, a description of what each input field means. The browser reads those attributes and translates the form into a structured tool interface. An agent that supports the standard can discover the tool, understand its purpose, pass the right parameters, and return the result to the user.

For human visitors, nothing changes. The form works exactly as before. But for AI agents, the page undergoes a quiet transformation: it goes from something they can read to something they can operate.

A content page becomes a service endpoint. A paragraph becomes a function call. A read-only web becomes an actionable one.
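
A minimal sketch of the information such a tool interface carries — a name, a description, a description per parameter, and a handler that reuses the configurator's existing logic. Everything here is a hypothetical illustration (the descriptor shape, field names, and the WP-X9 / WP-A3 model numbers are invented, not the WebMCP spec):

```javascript
// Hypothetical sketch, not the WebMCP spec: descriptor shape, field names,
// and the WP-X9 / WP-A3 model numbers are invented for illustration.
const compatibilityTool = {
  name: "check_heatpump_compatibility",
  description:
    "Checks which heat pump models fit a building and returns them " +
    "with a sizing note.",
  params: {
    buildingYear: "Construction year of the building, e.g. 1970",
    areaSqm: "Heated floor area in square meters",
    currentSystem: "Heating system being replaced, e.g. 'oil'",
  },
  // The handler delegates to the same logic the human-facing form uses.
  run({ buildingYear, areaSqm, currentSystem }) {
    const needsLowTemp = buildingYear < 1980; // older stock: radiators, thinner insulation
    return {
      compatible: true,
      models: needsLowTemp ? ["WP-X7", "WP-X9"] : ["WP-A3"],
      note: `Sized for ${areaSqm} sqm, replacing ${currentSystem}`,
    };
  },
};

// An agent that supports the standard discovers the descriptor, reads the
// descriptions, and calls the tool with the buyer's parameters:
console.log(
  compatibilityTool.run({ buildingYear: 1970, areaSqm: 140, currentSystem: "oil" })
);
```

The agent-facing layer is purely additive: the human-facing form keeps working exactly as before.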

AI reads your page: a static claim ("we're compatible"), reached via cite → click → navigate → find tool → convert.
AI uses your tool: a claim verifiable in real time ("check now"), reached via cite → invoke → result.

The standard is model-agnostic by design — not tied to any specific AI provider. And the optimization discipline it creates is entirely new.

III

Know, Search, Act

In "The Invisible Clients," we mapped two categories of queries that matter for AI visibility.

Know queries — the model already has the answer. Learned during training. It doesn't reach out. Your content is irrelevant.

Search queries — the model needs external information. It retrieves, evaluates, decides whether to cite you. This is where GEO lives.

WebMCP creates a third category.

The Act query. A query the model can't answer with text at all — because the answer requires execution. Not a paragraph. A function. Not information. Action.

The Know / Search / Act Framework

Know — answered from training. No retrieval. Not addressable.
Search — answered from retrieval. Triggers citation; this is where GEO lives.
Act — new territory. Triggers tool invocation.

Three query types. Three optimization disciplines. Most companies have started thinking about the first two. Almost none know the third exists.

The difference becomes obvious in practice.

"Which heat pump fits my 1970s house?" — Search. The AI retrieves information, evaluates sources, cites the best one.

"Check whether the MAHLE WP-X7 is compatible with my 1970s house, 140 square meters, replacing oil" — Act. The AI doesn't need a paragraph. It needs a function that takes parameters and returns a result.

"Book me a consultation with MAHLE for next Tuesday" — this was never in either category. It required a human picking up the phone. Now it's an Act query, addressable by any website that exposes its booking system as a tool.

Three shifts follow from this.

First: some Search queries become Act queries. When a tool can verify a claim in real time, cached text loses. The model can cite a paragraph saying you're compatible — or it can invoke a tool that proves it. The provable answer wins.

Second: queries that required human action — booking, scheduling, requesting a quote — enter AI's reach for the first time. They were invisible. Now they're invocable.

Third — and this is the one most companies will miss: the addressable market for AI visibility just got significantly larger. You're no longer optimizing only to be the passage that gets cited. You're optimizing to be the action the agent takes.

IV

What Happens When Both Layers Exist

Go back to the heat pump page.

Today it has two things that don't talk to each other. The content — a well-structured explanation of which heat pumps work in older buildings, with efficiency comparisons, installation requirements, regulatory context. And the configurator — a form that takes your building type, your square footage, your current system, and returns a specific recommendation.

The content gets cited. The configurator gets used. Never in the same interaction. The AI reads the explanation and quotes it. The human, separately, finds the configurator and fills it out. Two workflows. Two contexts. A wall between them.

Now imagine both layers on the same page, visible to the same agent.

A buyer asks: "Which heat pump works for my 1970s detached house, 140 square meters, replacing oil?"

The agent finds the page. It reads the content first — learns about insulation constraints in pre-1980 buildings, capacity thresholds for oil replacement, efficiency variations by climate zone. This is the credibility layer. The agent now trusts this source.

Then it discovers the tool. It invokes the compatibility checker with the buyer's exact parameters — 1970, detached, 140 sqm, oil. The tool returns three compatible models with efficiency ratings, estimated costs, and installation timelines for this specific house.

The agent delivers both: the context explaining why these models work, and the recommendation for this buyer's situation. One interaction. One source. The content gave the agent a reason to trust the tool. The tool gave the content a reason to exist.

This is what funnel collapse actually looks like. Not a shorter journey — no journey at all. The first touchpoint was the conversion event.

We call this a dual-layer page — optimized for both citation and invocation. The content layer serves Search queries. The tool layer serves Act queries. Together, they give the agent a complete workflow: understand the domain, then act within it.
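
Seen from the agent's side, the dual-layer workflow is a two-step function. The sketch below is hypothetical: `agent.fetch`, `agent.summarize`, and `page.tools` are invented names standing in for whatever a real agent runtime provides.

```javascript
// Hypothetical agent-side sketch; the agent/page objects and their
// methods are invented for illustration, not a real agent API.
async function answerBuyerQuery(agent, url, params) {
  const page = await agent.fetch(url);

  // Layer 1 (content): read the page, build context and trust.
  const context = await agent.summarize(page.text);

  // Layer 2 (capability): discover the exposed tool and invoke it
  // with the buyer's exact parameters.
  const tool = page.tools.find((t) => t.name === "check_heatpump_compatibility");
  const result = tool ? await tool.invoke(params) : null;

  // One interaction, one source: the why and the what, together.
  return { context, result };
}
```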

Neither layer works as well alone. Content without capability is recommendable but can't convert. Capability without content gets ignored — the agent has no reason to trust a tool it can't contextualize. The companies that build both layers become both: recommendable and actionable. That's the position that changes the economics.

V

The New Meta Description

There's a catch. The standard requires tool descriptions — short texts that tell AI what a tool does, what parameters it accepts, what it returns.

This sounds trivial. It isn't.

When an AI agent encounters multiple tools that could serve a query, it selects based on description quality. The clearer name, the more precise description, the better-explained parameters — that's the tool the agent invokes. The other tool gets skipped. Not because the capability is worse. Because the description is.

This is the meta description problem of the agentic web.

Twenty years ago, the quality of your meta description determined whether a searcher clicked your result. A hundred characters of text, easily ignored, quietly decisive. Now, the quality of your tool description determines whether an AI agent invokes your capability. Same leverage. Different audience.

We call this discipline Tool Description Optimization — TDO. The practice of writing tool names, parameter descriptions, and return value descriptions that maximize the probability of agent selection. Precision over persuasion. Structured clarity over marketing language.

Consider what this means in practice.

"Check heat pump compatibility" → selected
"Compatibility tool" → skipped

The agent matches a verb in the user's intent to a verb in your tool name. No verb, no match.

"Returns compatible models with efficiency ratings, estimated annual costs, and installation timeline" → selected
"Returns results" → skipped

The agent picks the tool whose return value it can predict.

Parameter accepts "1970s" when user says "my 1970s house" → invoked
Parameter requires dropdown selection of predefined decades → skipped

Agents pass natural language, not form conventions.

Most companies will get this wrong in a familiar way. They'll write tool descriptions like marketing copy — vague, aspirational, full of superlatives. The AI does not care that your configurator is "world-class." It cares that the tool accepts a building year as an integer and returns compatibility ratings as a structured object.

Precise descriptions get selected. Vague descriptions get skipped. The quality of your capability is irrelevant if the description doesn't communicate it.
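
A toy way to see why: agents select by matching the user's intent against tool names and descriptions. The scoring below is a deliberately crude stand-in for a real agent's ranking, and both descriptors are invented examples, but the direction of the effect is the point.

```javascript
// Toy illustration of why description quality drives selection.
// Both descriptors are invented; the scoring is a deliberately
// crude stand-in for a real agent's ranking.
const vague = { name: "compatibility_tool", description: "Returns results" };
const precise = {
  name: "check_heatpump_compatibility",
  description:
    "Returns compatible heat pump models with efficiency ratings, " +
    "estimated annual costs, and installation timeline",
};

// Minimal proxy for intent matching: count query words that appear
// in the tool's name and description.
function matchScore(query, tool) {
  const haystack = `${tool.name} ${tool.description}`.toLowerCase();
  return query
    .toLowerCase()
    .split(/\W+/)
    .filter((w) => w && haystack.includes(w)).length;
}

const query = "check heat pump compatibility and estimated costs";
console.log(matchScore(query, precise), matchScore(query, vague)); // precise wins
```

Under this crude proxy the precise descriptor matches 7 of the query's words and the vague one matches 1. A real agent's matching is far more sophisticated, but it consumes the same raw material: your words.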

VI

Preparation Over Premature Implementation

The standard is early. Chrome 146 ships it behind a flag. Polyfills exist. No major AI agent has WebMCP tool discovery in production yet.

That's not a reason to wait. It's a reason to prepare.

Almost no website exposes structured tools today. The positions aren't just unclaimed — the category barely exists. When agents gain tool discovery — and Google and Microsoft are co-authoring the spec — they'll reach for whatever's available. The websites with well-described, reliable tools will become defaults. The rest will be retrofitting while those defaults lock in.

There's a timing tension worth naming honestly. Full implementations are premature — the runtime isn't there. But the preparation isn't premature at all. Defining which capabilities to expose. Writing tool descriptions. Designing parameter schemas. Mapping Act queries to existing functionality. That work compounds regardless of when the standard stabilizes.

The parallel to early GEO is exact. The companies that structured their content for AI citation before citation mattered are the ones dominating citation results today. The same window is open for the agentic web — earlier in the cycle, with even less competition.

Close

Your website has configurators, calculators, booking systems. They work. Humans use them every day. AI can see none of it.

WebMCP changes this. The page that gets cited becomes the page that gets used. And the companies that master both — GEO for Search queries, TDO for Act queries — will hold the strongest position in the agentic web: recommendable for their expertise, actionable for their capabilities.

Your content makes you recommendable. Your tools make you actionable.

The document web was read-only. The agentic web is actionable.