agentgrade

Agent Readiness: how websites get ready for AI agents

Updated 2026-05-12 · by Eoin Siegel

Agent readiness is how prepared a website is to be used by autonomous AI agents — software that browses, queries, and transacts on a user’s behalf without rendering pages. A site is agent-ready when its capabilities (data, search, payments, tools) are exposed through machine-readable protocols rather than buried in JavaScript and clickflows. As of May 2026, the top 100 websites average a 55% agent readiness score; 99% fail basic content negotiation.

What is agent readiness?

An AI agent is software that uses an LLM to plan and execute actions on a user’s behalf — running searches, calling APIs, comparing prices, booking travel, filing forms. Agent readiness measures whether a website is built so that an agent can complete these tasks reliably.

A site that looks great in Chrome may still be useless to an agent. Agents don’t render JavaScript single-page apps by default. They don’t click through cookie banners. They don’t fill out CAPTCHAs. And they care about latency: a 12-second page load that a human tolerates is a hard timeout for an agent making decisions across dozens of sites.

Agent readiness shifts the question from “is my site usable?” to “is my site addressable?” — is there a structured surface an agent can call, or just a wall of HTML?

The shift to the agentic web

For two decades, the dominant traffic pattern on the web was: a human visits a page, scans it visually, clicks something. That assumption is baked into how sites are built — server-rendered or hydrated HTML, designed for eyes.

In 2025–2026 that pattern began breaking. OpenAI (ChatGPT Agent), Anthropic (Claude with computer use and MCP), Google (Gemini Agent, which absorbed Project Mariner in May 2026), and Perplexity (Comet) shipped agents that browse and act on a user’s behalf. Adobe Analytics measured a 4,700% year-over-year jump in generative-AI traffic to US shopping sites by July 2025, with a further 393% increase in Q1 2026. HUMAN Security’s 2026 State of Agentic Traffic report found AI agent and agentic-browser traffic up 7,851% year-over-year; autonomous bots still account for only 1.7% of AI-driven traffic. eMarketer forecasts AI platforms will process roughly 1.5% of US retail ecommerce in 2026 (~$20.9 billion, about 4× 2025), and Morgan Stanley projects $385 billion in agentic-commerce impact by 2030.

The agentic web is not a replacement for the human web — it is a parallel surface layered on top. Sites that expose that surface (via MCP, OpenAPI, x402, llms.txt, and related protocols) become reachable. Sites that don’t expose it get bypassed in favor of those that do.

What makes a site agent-ready

Agent readiness breaks down into five categories of signals:

  1. Discovery — can an agent find your capabilities? Signals: /llms.txt, /skill.md, /sitemap.xml, agent-aware /robots.txt, HTTP Link headers pointing at service descriptions.
  2. Capability exposure — once found, can an agent call them? Signals: an MCP server at /mcp, an OpenAPI spec at /openapi.json, a WebMCP manifest, a SKILL.md with a callable description.
  3. Content negotiation — will the site respond in machine-readable form when asked? Signals: returning JSON when Accept: application/json is sent, returning markdown when Accept: text/markdown is sent, not gating agent User-Agents behind login walls.
  4. Payment protocols — can an agent transact? Signals: x402 (HTTP 402 with quoted prices), Stripe SPT, Coinbase Commerce, AP2, MPP — any standardized way for an agent to pay for a metered resource.
  5. Trust and identity — can an agent verify what it’s calling? Signals: Web Bot Auth headers, OAuth discovery at /.well-known/openid-configuration, MCP server cards, an API catalog at /.well-known/api-catalog.
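Most of these signals live at well-known paths, so a first-pass audit is just a list of URLs to probe. A minimal sketch of building that probe list, grouped by the categories above (the path set is a subset of the signals named in this article; a real scanner like agentgrade's runs many more checks and validates response content, not just presence):

```python
# Build the list of well-known URLs to probe for agent-readiness signals.
from urllib.parse import urljoin

# Paths mirror the signal categories above (illustrative subset).
SIGNAL_PATHS = {
    "discovery": ["/llms.txt", "/skill.md", "/sitemap.xml", "/robots.txt"],
    "capability": ["/mcp", "/openapi.json", "/.well-known/webmcp.json"],
    "trust": ["/.well-known/openid-configuration", "/.well-known/api-catalog"],
}

def probe_urls(origin: str) -> list[str]:
    """Return the full URLs to probe for a given origin."""
    return [urljoin(origin, path)
            for paths in SIGNAL_PATHS.values()
            for path in paths]

print(probe_urls("https://example.com")[0])  # https://example.com/llms.txt
```

A presence check (HTTP GET per URL) is only the start: each signal then needs content validation, e.g. that /openapi.json parses as an OpenAPI document.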

The core protocols

Model Context Protocol (MCP)

MCP is a JSON-RPC 2.0 protocol that lets an AI agent discover and call tools on a remote server. A site exposing /mcp declares named tools (e.g., search_products, get_invoice) that an agent can call programmatically. MCP was originally specified by Anthropic in late 2024 and is now supported by Claude, Cursor, Windsurf, and several other agent runtimes. Read the MCP guide.
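On the wire, an MCP tool invocation is an ordinary JSON-RPC 2.0 request. A sketch of what an agent would send to call a search_products tool (the tools/call method and params shape follow the MCP spec; the tool name and arguments are the hypothetical examples from this article):

```python
import json

# JSON-RPC 2.0 envelope for an MCP "tools/call" request.
# "search_products" and its arguments are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_products",
        "arguments": {"query": "usb-c cable", "limit": 5},
    },
}

print(json.dumps(request, indent=2))
```

The server replies with a JSON-RPC response whose result carries the tool's output; before calling tools, an agent typically sends tools/list to discover what the server offers.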

x402 (HTTP 402 Payment Required)

x402 is a payment standard built on HTTP status code 402 for monetized API endpoints. A server returns 402 with a payment-required header that quotes a price and accepted payment networks. An agent (or its wallet) inspects the header, decides whether to pay, attaches a payment proof, and retries. x402 enables per-call micropayments without API keys. Introduced by Coinbase, with growing adoption across MCP-aware infrastructure. Read the x402 guide.

llms.txt

/llms.txt is a plain-text file at the root of a domain that points an LLM-driven agent at the most useful starting URLs on the site. It is the agent-era equivalent of robots.txt — but additive rather than restrictive. A well-formed llms.txt lists key pages with one-line summaries. Read the llms.txt guide.
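A minimal example of what such a file might look like (site name, URLs, and summaries are invented for illustration; the markdown shape — a title, a short blockquote, then link lists — follows the llms.txt proposal):

```markdown
# Example Store

> An online store selling widgets. Key docs, product data, and API below.

## Key pages

- [Product catalog](https://example.com/products.md): full catalog in markdown
- [API reference](https://example.com/openapi.json): OpenAPI spec for the public API
- [Pricing](https://example.com/pricing.md): plans and per-call rates
```

The one-line summaries matter: they let an agent decide which URL to fetch without crawling the whole site.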

Content negotiation (HTTP Accept)

The same URL can return different representations depending on what the client asks for. A site that serves HTML to browsers and machine-readable JSON or markdown to agents — keyed on the Accept header or a known agent User-Agent — is content-negotiating. It is not a new protocol; it is part of HTTP itself (RFC 9110). For agents it is one of the highest-leverage signals: returning text/markdown when an agent sends Accept: text/markdown lets the agent ingest your content directly without scraping HTML. Read the content-negotiation guide.
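The selection logic can be sketched as a framework-agnostic function keyed on the Accept header. This simplified version returns the first listed media type the site supports (a full implementation would honor q-values per RFC 9110); the response bodies are placeholders:

```python
# Sketch: choose a response representation from the HTTP Accept header.
# Plug into any web framework's request handler; bodies are placeholders.

REPRESENTATIONS = {
    "application/json": '{"title": "Agent readiness"}',
    "text/markdown": "# Agent readiness\n",
    "text/html": "<h1>Agent readiness</h1>",
}

def negotiate(accept_header: str) -> tuple[str, str]:
    """Return (content_type, body) for the first supported media type."""
    for part in accept_header.split(","):
        # Strip parameters like ";q=0.9" (q-value ordering omitted here).
        media_type = part.split(";")[0].strip().lower()
        if media_type in REPRESENTATIONS:
            return media_type, REPRESENTATIONS[media_type]
    # Fall back to HTML for browsers and unknown clients.
    return "text/html", REPRESENTATIONS["text/html"]

print(negotiate("text/markdown")[0])  # text/markdown
```

Served this way, the same URL stays canonical for humans, crawlers, and agents — no separate "API version" of each page to keep in sync.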

OpenAPI

OpenAPI 3.0+ specs at /openapi.json describe REST endpoints in a machine-readable schema. Agents use OpenAPI to understand what an API does without trial and error — operation IDs, parameters, response shapes, examples. Read the OpenAPI guide.
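A minimal /openapi.json illustrating the pieces an agent relies on — an operationId, typed parameters, and described responses (the endpoint and field values here are invented for illustration):

```json
{
  "openapi": "3.1.0",
  "info": { "title": "Example Store API", "version": "1.0.0" },
  "paths": {
    "/products": {
      "get": {
        "operationId": "searchProducts",
        "summary": "Search the product catalog",
        "parameters": [
          { "name": "q", "in": "query", "schema": { "type": "string" } }
        ],
        "responses": {
          "200": { "description": "Matching products as JSON" }
        }
      }
    }
  }
}
```

Descriptive operationIds and summaries double as tool descriptions when an agent maps the spec into its tool-calling interface.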

WebMCP

WebMCP is a manifest format at /.well-known/webmcp.json that exposes a site’s capabilities to MCP-aware browsers. It declares which tools the site supports without requiring a separate MCP server endpoint. Read the WebMCP guide.

A2A (Agent-to-Agent)

A2A is the protocol for agent-to-agent communication and delegation, published as a discoverable manifest at /.well-known/a2a.json. It lets one agent advertise capabilities to another and negotiate task handoffs. Read the A2A guide.

SKILL.md

SKILL.md is a markdown file with YAML frontmatter that describes a callable skill an agent can invoke. It is a cross-vendor standard supported by Claude Code, Cursor, and several other runtimes. The frontmatter declares the skill’s name, description, and arguments; the body is plain markdown instructions for the model. See an example.
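A minimal SKILL.md sketch (the name and description frontmatter fields are as described above; the skill itself and the endpoint it references are invented for illustration):

```markdown
---
name: search-products
description: Search the Example Store catalog and return matching products.
---

# Search products

Call GET /products?q=<query> and return the top results as a markdown
list, including each product's name, price, and URL.
```

The frontmatter is what runtimes index for discovery; the body is what the model reads once the skill is invoked.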

Sitemap and agent-aware robots.txt

Older web protocols still matter. A /sitemap.xml (or one declared in /robots.txt via a Sitemap: directive) tells agents which pages to prioritize. A robots.txt with explicit allow/deny for agent User-Agents (GPTBot, ClaudeBot, Google-Extended, Perplexity-Bot) tells crawlers and agents which paths they can access.
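An agent-aware robots.txt might look like this — allowing the agent User-Agent tokens named above onto public content, keeping them out of account pages, and declaring the sitemap (paths and policy are illustrative; use the exact UA tokens each vendor documents):

```text
# AI agents: public content allowed, account pages off-limits.
User-agent: GPTBot
User-agent: ClaudeBot
User-agent: Google-Extended
User-agent: Perplexity-Bot
Allow: /
Disallow: /account/

User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
```

Explicit per-agent rules beat a blanket policy: they signal that agent traffic is expected and tell each crawler exactly where it is welcome.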

How to measure agent readiness

agentgrade scans a domain across 70+ checks in these five categories and produces a 0–100 score and a letter grade. Each check returns pass/fail with a fix hint pointing at a Knowledge Base article. Run a scan at agentgrade.com or via the API at /api/scan?url=. There’s also a public leaderboard — The Agentic Web Index — that ranks the top 100 sites and refreshes regularly.

How to improve your score

The fastest wins, in order of leverage:

  1. Add /llms.txt — 65% of top-100 sites are missing it, and writing one takes about 10 minutes.
  2. Expose an MCP server or publish an OpenAPI spec for your public API.
  3. Implement content negotiation for Accept: application/json and Accept: text/markdown.
  4. Link agent files from your homepage so they’re discoverable (100% of top-100 sites currently fail this).
  5. Add x402 quoted-price headers on metered endpoints.

The Knowledge Base has step-by-step guides for each. agentgrade’s scan report links each failing check directly to the relevant guide.

Frequently asked questions

What is agent readiness?

Agent readiness is how prepared a website is for autonomous AI agents — software that browses, queries, and transacts on a user’s behalf. It measures whether a site’s capabilities are exposed through machine-readable protocols rather than locked behind JavaScript and clickflows.

Why does agent readiness matter in 2026?

AI agents now route a measurable share of commercial traffic for major platforms. Adobe Analytics tracked a 4,700% year-over-year surge in generative-AI traffic to US shopping sites by July 2025, with a further 393% Q1 2026 increase. eMarketer forecasts AI platforms will process roughly 1.5% of US retail ecommerce in 2026 (~$20.9 billion, ~4× 2025), and Morgan Stanley projects $385 billion in agentic-commerce impact by 2030. Sites that aren’t agent-ready get bypassed in favor of sites that are.

What is the difference between SEO and agent readiness?

SEO optimizes for human search-engine traffic — ranking pages so people find and click them. Agent readiness optimizes for non-human callers — exposing data and capabilities so agents can use them programmatically. They share some signals (sitemap, robots.txt, structured data) but the goals differ: SEO wants visibility, agent readiness wants addressability.

Is agent readiness the same as AEO or GEO?

They overlap. AEO (Answer Engine Optimization) and GEO (Generative Engine Optimization) focus on getting cited inside LLM answers (ChatGPT, Perplexity, Google AI Overviews). Agent readiness is broader — it also covers agent-driven transactions, tool calls, and capability exposure. If you’re being cited by ChatGPT, that’s AEO; if an agent can complete a purchase on your site, that’s agent readiness.

How do I measure my site’s agent readiness?

Run a scan at agentgrade.com. It checks 70+ signals across discovery, capability exposure, content negotiation, payment protocols, and trust/identity. You get a 0–100 score, a letter grade, and per-check fix hints. The API is at /api/scan and there is also a CLI: npm i -g agentgrade-cli.

What is a typical agent readiness score?

The top 100 websites by traffic average 55% as of May 2026. Cursor leads at 82%. Major social platforms (Facebook, X, Instagram, LinkedIn, Reddit) score 0% because they block scanners or require auth. Grade distribution across the top-100: 0 A grades, 43 B grades, 35 C grades, 12 D grades, 10 F grades.

What is the single fastest way to improve my score?

Add /llms.txt — a plain-text file at your domain root listing key pages with one-line summaries. 65% of top-100 sites are still missing it, and it takes about 10 minutes to write a good one.

Will agent readiness affect my SEO ranking?

Indirectly, in some cases. Some of the same signals (sitemap, structured data, content negotiation) feed both. But agent readiness is not a Google ranking factor in the conventional sense. Its impact is on agent-driven referrals — LLM citations, agentic checkout, programmatic API discovery.

Is there a single open standard for agent readiness?

No. Agent readiness is a composite of many emerging protocols (MCP, x402, llms.txt, OpenAPI, A2A, WebMCP, SKILL.md). agentgrade’s scoring model weights each based on adoption, criticality, and whether the protocol can be verified at scan time.

Run a scan · Knowledge Base · Leaderboard · API spec