## What is MCP?

MCP (Model Context Protocol) is the open standard for connecting AI assistants to external tools, data sources, and services. Your server exposes "tools" — named functions with descriptions and input schemas — and the AI model decides when to call them mid-conversation. Created by Anthropic in late 2024 and now supported by Claude, ChatGPT, IDE plugins, and a growing list of agent frameworks, MCP plays the same role for AI that USB plays for hardware: one cable, any device.

## Why MCP matters

Before MCP, every AI assistant connected to every external service through bespoke "plugin" or "function calling" code. Each AI vendor had its own format. Each tool integration had to be re-written per vendor — once for Claude, again for ChatGPT, again for Cursor, again for a custom agent.

MCP standardizes the interface. A single MCP server works with every MCP-compatible client. That means:

- Tool builders write the integration **once** instead of N times
- AI clients pick up new tools the moment they're published, no client update required
- Agents can discover, list, and call tools at runtime — no compile-time wiring
- Open-source servers can be audited, forked, and self-hosted

It's the difference between proprietary printer drivers and a universal standard like TCP/IP. The protocol is boring. Standardizing on it unlocks an ecosystem.

## How it works

All communication uses **JSON-RPC 2.0**; over the Streamable HTTP transport, messages are POSTed to a single endpoint (e.g., `/mcp`):

1. Client sends `initialize` to establish the session
2. Server responds with its name, version, and capabilities
3. Client sends `tools/list` to discover available tools
4. Server returns a list of tools with descriptions and input schemas
5. Client sends `tools/call` with a tool name and arguments
6. Server executes the tool and returns the result
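The six steps above can be sketched from the client side. This is a minimal illustration using global `fetch` (Node 18+); the endpoint URL, client name, and the `query` argument are placeholders, and error handling is omitted:

```javascript
// Walk the MCP handshake: initialize, discover tools, call one.
// Each step is a JSON-RPC 2.0 request; `id` ties responses to requests.
async function handshake(endpoint) {
  let nextId = 1;
  const rpc = async (method, params = {}) => {
    const res = await fetch(endpoint, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', Accept: 'application/json' },
      body: JSON.stringify({ jsonrpc: '2.0', id: nextId++, method, params }),
    });
    return (await res.json()).result;
  };

  // Steps 1-2: establish the session, read back server capabilities
  const init = await rpc('initialize', {
    protocolVersion: '2024-11-05',
    capabilities: {},
    clientInfo: { name: 'example-client', version: '0.1' },
  });
  console.log('connected to', init.serverInfo.name);

  // Steps 3-4: discover what tools the server exposes
  const { tools } = await rpc('tools/list');

  // Steps 5-6: call a tool with arguments matching its inputSchema
  return rpc('tools/call', {
    name: tools[0].name,
    arguments: { query: 'hello' },
  });
}
```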

## MCP vs alternatives

| Aspect | OpenAI function calling | [OpenAPI](/kb/openapi) / Swagger | MCP |
|---|---|---|---|
| Vendor scope | Single vendor | Vendor-neutral | Vendor-neutral |
| Designed for | LLM tool calls | REST APIs | AI agents |
| Discovery | Hardcoded in client | Static spec file | Runtime via `tools/list` |
| Streaming | Limited | None native | Built-in (SSE / Streamable HTTP) |
| Stateful sessions | No | No | Yes (initialize → calls) |
| Auth | Client-side keys | API keys, OAuth | OAuth, bearer, custom |
| Ecosystem | OpenAI only | Mature, broad | New, growing fast |

OpenAPI describes a REST API at design time. MCP describes a *live* tool surface at runtime. The two are complementary — many MCP servers wrap existing OpenAPI services and expose them as agent-callable tools.
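The wrapping direction is mechanical: each OpenAPI operation maps to one MCP tool. A sketch of that mapping (field names follow both specs; a real wrapper would also handle path/query parameters, auth, and errors):

```javascript
// Convert one OpenAPI operation into an MCP tool descriptor:
// operationId becomes the tool name, summary the description,
// and the JSON request-body schema becomes inputSchema.
function operationToTool(path, httpMethod, operation) {
  return {
    name: operation.operationId || `${httpMethod}_${path}`,
    description: operation.summary || operation.description || '',
    inputSchema:
      operation.requestBody?.content?.['application/json']?.schema ??
      { type: 'object', properties: {} },
  };
}
```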

## Implementation example

A minimal Express server handling the three core methods:

```javascript
const express = require('express');
const app = express();
app.use(express.json()); // JSON-RPC requests arrive as application/json bodies

app.post('/mcp', (req, res) => {
  const { method, params, id } = req.body;

  // JSON-RPC notifications (no id) expect no response body — just acknowledge
  if (id === undefined) return res.sendStatus(202);

  if (method === 'initialize') {
    return res.json({ jsonrpc: '2.0', id, result: {
      protocolVersion: '2024-11-05',
      capabilities: { tools: {} },
      serverInfo: { name: 'my-service', version: '1.0' }
    }});
  }

  if (method === 'tools/list') {
    return res.json({ jsonrpc: '2.0', id, result: {
      tools: [{
        name: 'search',
        description: 'Search for items',
        inputSchema: {
          type: 'object',
          properties: { query: { type: 'string' } },
          required: ['query']
        }
      }]
    }});
  }

  if (method === 'tools/call') {
    const { name, arguments: args } = params;
    // Dispatch on `name` here; this sketch handles only the 'search' tool
    return res.json({ jsonrpc: '2.0', id, result: {
      content: [{ type: 'text', text: `Search results for "${args.query}"...` }]
    }});
  }

  res.json({ jsonrpc: '2.0', id, error: { code: -32601, message: 'Method not found' }});
});

app.listen(3000);
```

## Transport options

MCP supports three transports — pick based on where your client runs:

- **Streamable HTTP** *(recommended for remote servers)*: Plain HTTP POST to one endpoint. Server returns either a JSON response or an SSE stream for tools that emit progress. Works through CDNs, load balancers, and firewalls. This is what `claude.ai` web and most hosted clients expect.
- **SSE (Server-Sent Events)**: Older transport using a long-lived GET for server→client and POST for client→server. Still supported but Streamable HTTP supersedes it for new servers.
- **stdio**: Process pipes — the client launches your server as a subprocess and communicates via stdin/stdout. Used by Claude Desktop and CLI agents. Zero network, zero CORS, but the client must be able to run your code locally.

If you want one MCP server to work with Claude Desktop *and* the Claude web app *and* ChatGPT *and* Cursor, ship Streamable HTTP. Almost everyone speaks it now.
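For the stdio case, the client configuration just points at a command to launch. Claude Desktop, for example, reads `claude_desktop_config.json`; the server name and path below are placeholders:

```json
{
  "mcpServers": {
    "my-service": {
      "command": "node",
      "args": ["/path/to/server.js"]
    }
  }
}
```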

## CORS — required for browser-based agents

A growing class of MCP clients runs *in the browser*: Claude.ai web, ChatGPT browser, browser extensions, and embedded chat widgets. When JavaScript running on `claude.ai` calls `fetch('https://yourdomain.com/mcp')`, the browser does two things on its own:

1. It actually sends the request to your server.
2. It refuses to hand the response back to the JavaScript that asked for it — *unless* your server's response includes an `Access-Control-Allow-Origin` header that names `claude.ai` (or `*`).

So without CORS headers, your server runs fine and returns 200, but the MCP client sitting in the browser tab sees a network error and can never read the body. This is the browser protecting users from one site silently making authenticated requests to another. Server-side clients like Claude Desktop or the MCP CLI aren't subject to this — there's no browser policing the response — which is why an endpoint can work in Desktop but fail in the web app.

To support browser-based MCP clients, send these headers on every `/mcp` response, and handle the OPTIONS preflight:

```javascript
app.use('/mcp', (req, res, next) => {
  res.set('Access-Control-Allow-Origin', '*');
  res.set('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
  res.set('Access-Control-Allow-Headers', 'Content-Type, Accept, Authorization');
  if (req.method === 'OPTIONS') return res.sendStatus(204);
  next();
});
```

### Wildcard vs specific origin

`Access-Control-Allow-Origin: *` is fine for **public, read-only** MCP endpoints. If your MCP endpoint requires authentication or charges for tool calls, return a specific origin (or echo back the request's `Origin` header against an allowlist) and set `Access-Control-Allow-Credentials: true`. Wildcard origins cannot be used together with credentials per the CORS spec.
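An allowlist variant might look like the sketch below; the listed origins are examples, to be adjusted to the clients you actually support:

```javascript
// CORS headers for an authenticated endpoint: echo a known origin
// instead of '*', and allow credentials. Returns null for unknown
// origins so the caller sends no CORS headers at all.
const ALLOWED_ORIGINS = new Set(['https://claude.ai', 'https://chatgpt.com']);

function corsHeaders(origin) {
  if (!ALLOWED_ORIGINS.has(origin)) return null;
  return {
    'Access-Control-Allow-Origin': origin,      // specific origin, never '*'
    'Access-Control-Allow-Credentials': 'true', // legal only with a specific origin
    'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
    'Access-Control-Allow-Headers': 'Content-Type, Accept, Authorization',
    Vary: 'Origin',                             // caches must not reuse across origins
  };
}
```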

### Why preflight matters

For a "simple" request — a plain `GET` or a `POST` with `Content-Type: text/plain` — the browser just sends it and only the response gets blocked. But MCP uses `POST` with `Content-Type: application/json`, which the browser considers risky enough to *check first*. So before sending the actual POST, the browser sends an `OPTIONS` request asking your server: "Will you accept a POST from this origin with these headers?"

If your server doesn't respond to the OPTIONS with a 2xx status and the right CORS headers, the browser cancels the POST entirely — it never reaches your server. That's why CORS middleware has to handle both: the OPTIONS preflight, *and* the actual POST.

## Authentication

For public read-only servers, no auth is fine. For private or paid tools, MCP supports:

- **Bearer tokens** in the `Authorization` header (simplest)
- **OAuth 2.1** with PKCE — the standard for user-delegated access; the MCP spec defines a metadata endpoint at `/.well-known/oauth-authorization-server` so clients can discover your OAuth flow automatically
- **Per-request payment** via [x402](/kb/x402) — return HTTP 402 with payment instructions instead of authenticating in advance

OAuth is the right default if humans are authorizing on behalf of themselves. Bearer tokens are right for service-to-service. x402 is right when calls are paid and stateless.

## Who's using MCP

- **Anthropic Claude** — native MCP client in Claude Desktop, Claude.ai web, and the Claude API
- **OpenAI ChatGPT** — added MCP support in 2025
- **IDE integrations** — Cursor, Zed, Windsurf, and JetBrains plugins all speak MCP
- **Agent frameworks** — LangChain, LlamaIndex, and most agent SDKs include MCP transports
- **Official MCP Registry** at `registry.modelcontextprotocol.io` lists hundreds of public servers
- **Smithery** at `smithery.ai` is the largest community-maintained MCP marketplace

agentgrade publishes its own MCP server (`com.agentgrade/mcp-server`) for agents that want to scan sites from inside an MCP-aware client.

## Common errors and debugging

- **`Method not found`** (-32601): The client called a method your server doesn't implement. Always return a graceful error for unknown methods rather than throwing.
- **CORS preflight fails**: OPTIONS returns 404 or 405. Add the middleware shown above — your server must handle OPTIONS as well as POST.
- **Tools list is empty in the client UI**: Your `tools/list` response is well-formed but the client expects a non-empty `tools` array. Confirm at least one tool is registered.
- **`Invalid params`** (-32602): The arguments sent by the client don't match your `inputSchema`. Look at the JSON Schema — required fields missing or wrong types are the usual culprits.
- **Works in Desktop, fails in the web app**: Almost always CORS. The browser blocks the response; server logs show 200s but the client sees nothing. Open the browser devtools network tab to confirm.

## Key concepts

- **Tools**: Functions your server exposes (e.g., "search", "create_invoice")
- **inputSchema**: JSON Schema describing what arguments each tool accepts
- **Resources**: Read-only data your server can hand to the model (files, snippets, query results)
- **Prompts**: Reusable prompt templates the server publishes for the client to use
- **Streamable HTTP**: The recommended transport — all messages via POST to one endpoint
- **SSE**: Server-Sent Events — older transport, kept for backward compatibility

## FAQ

### Is MCP an Anthropic protocol or an open standard?

It was created by Anthropic and is published as an open specification. The reference implementations are MIT-licensed. OpenAI, IDE vendors, and agent frameworks have adopted it independently of Anthropic — it is a community standard now, not a single-vendor protocol.

### How is MCP different from OpenAI's function calling?

Function calling is a feature of one vendor's API. MCP is a transport-level protocol any client can speak. With function calling, the model and tool definition live inside the same OpenAI request; with MCP, the model lives at the client and the tools live at your server, and they communicate over a defined wire format.

### Should I expose my existing REST API as MCP or just give the AI my OpenAPI spec?

Both work, but they target different clients. Models with native MCP support (Claude, Cursor) call tools directly via MCP. Models that read OpenAPI specs need the spec rendered into their prompt or function-call format. If you only have time for one, ship MCP — most agent clients prefer it for runtime discovery. If you already have an OpenAPI spec, wrapping it in an MCP server is a few hundred lines.

### Does MCP require WebSockets?

No. The current standard is Streamable HTTP — plain POST to one endpoint, with optional SSE streaming inside. Earlier drafts used WebSockets; that path was abandoned because plain HTTP traverses CDNs, load balancers, and corporate firewalls without configuration.

### Can MCP servers charge for tool calls?

Yes. The protocol doesn't include payment, but you can layer x402 on top: have your server return HTTP 402 from `tools/call` until the agent attaches a valid payment receipt. This is one of the most active patterns in the agent ecosystem.
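A sketch of that gating logic as a pure decision function. The receipt check and the payment fields (`accepts`, scheme, amount) are purely illustrative — the real x402 flow defines its own headers and schemes:

```javascript
// Decide how to answer a tools/call based on an attached payment receipt.
// Without a receipt, return HTTP 402 with machine-readable payment
// instructions; with one, run the tool as normal (verification omitted).
function gateToolCall(paymentReceipt) {
  if (!paymentReceipt) {
    return {
      status: 402, // Payment Required
      body: {
        error: 'payment_required',
        accepts: [{ scheme: 'exact', amount: '0.001', currency: 'USDC' }],
      },
    };
  }
  return {
    status: 200,
    body: { content: [{ type: 'text', text: 'paid result' }] },
  };
}
```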

### How do I list my MCP server in the registry?

Submit a manifest to [registry.modelcontextprotocol.io](https://registry.modelcontextprotocol.io). Smithery.ai mirrors the registry and adds discovery features. Listing is free and the review queue is short.

### Why is my MCP server slow?

The protocol itself is cheap — JSON-RPC over HTTP. Slowness is almost always (a) the tool implementation, or (b) a cold serverless start. Profile the `tools/call` handler before blaming the transport.

### Do I need a separate server per tool, or one server with many tools?

One server with many tools is the norm. Group tools by logical service (e.g., all GitHub operations in one server). Clients can connect to multiple servers simultaneously, so splitting only makes sense when permissioning or hosting needs differ.

## Spec maturity

**Formal specification.** MCP is maintained by Anthropic with a versioned spec; the examples above use protocol version `2024-11-05`. It is supported natively by Claude, ChatGPT, Cursor, and most agent frameworks. The Streamable HTTP transport is stable and used by every major client.

## Learn more

- [modelcontextprotocol.io](https://modelcontextprotocol.io) — Official specification
- [MCP TypeScript SDK](https://github.com/modelcontextprotocol/typescript-sdk) — Reference implementation
- [Official MCP Registry](https://registry.modelcontextprotocol.io) — Public catalog of MCP servers
- [Smithery](https://smithery.ai) — Largest community marketplace
- [MDN: CORS](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) — Cross-Origin Resource Sharing reference
- [Agent readiness](/agent-readiness) — How MCP fits into the broader agentic web landscape

## Related

- [SKILL.md](/kb/skills)
- [A2A](/kb/a2a)
- [WebMCP](/kb/webmcp)
- [llms.txt](/kb/llms-txt)
