## What is "JavaScript-rendered"?

A page is JavaScript-rendered when the HTML the server sends is essentially empty — the actual content (headings, prose, product information) only appears after the browser downloads and runs a JavaScript bundle that mounts a framework like React, Vue, Svelte, or Angular into an empty root element.

A typical JS-rendered homepage looks like this on the wire:

```html
<body>
  <div id="root"></div>
  <script type="module" src="/assets/index-D9LVtTP6.js"></script>
</body>
```

On the wire, the `<body>` contains no readable text at all. Everything you see in a browser is constructed client-side after the JS runs.

## Why this matters for agents

Most AI crawlers do not execute JavaScript. When ChatGPT, Claude, Perplexity, or Gemini fetches your URL, or a typical agent framework hits it, they get the raw HTML, and your homepage is blank. Your product description, headings, links, and prose are invisible to them.

This is a real harm for retrieval and summarization. An LLM asked "what does this company do?" while citing your URL will produce a poor answer or hallucinate, because there is nothing on the page to read.

## What this does *not* break

Tags emitted into the static `<head>` of your HTML are unaffected by client-side rendering. Even a fully JS-rendered SPA can pass these checks if the build tool populates the head correctly:

- JSON-LD structured data (`<script type="application/ld+json">`)
- OpenGraph (`og:image`, `og:title`, etc.)
- Twitter Card metadata
- Canonical URL
- Favicons and `<link rel="alternate">` tags

If your site is JS-rendered and these checks are *also* failing, those are independent gaps — fix them by adding the right `<head>` tags to your build's HTML template, regardless of whether you fix the rendering itself.
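One way to spot those independent gaps is a quick string-level check of the served HTML. A rough sketch (regex matching rather than real HTML parsing; the function name is ours):

```javascript
// Report which agent-relevant <head> signals are present in raw HTML.
// Regex matching is approximate; a proper audit should parse the document.
function headSignals(html) {
  const m = html.match(/<head[^>]*>([\s\S]*?)<\/head>/i);
  const head = m ? m[1] : '';
  return {
    jsonLd: /<script[^>]+type=["']application\/ld\+json["']/i.test(head),
    openGraph: /<meta[^>]+property=["']og:/i.test(head),
    twitterCard: /<meta[^>]+name=["']twitter:/i.test(head),
    canonical: /<link[^>]+rel=["']canonical["']/i.test(head),
  };
}
```

Any `false` entry points at a `<head>` tag to add to the build's HTML template.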

## The shared root cause: static-hosted SPA bundles

JS-rendered sites usually ship as a static bundle (an `index.html` and a folder of JS/CSS) deployed to a host that serves `index.html` for any GET request: Vercel static, Netlify, Cloudflare Pages, GitHub Pages, S3 + CloudFront, Fastly Frontend, Firebase Hosting.

That deployment pattern has no server-side per-request logic at all. The same configuration that produces a JS-rendered body **also** produces:

- Same HTML returned for every `Accept` header (no content negotiation)
- HTML 200 for unknown paths instead of structured JSON 404s
- No `Vary`, `Cache-Control`, or rate-limit headers tuned for agents
- No way to vary the response by User-Agent

Fixing one symptom (e.g. content negotiation) without addressing the deployment model is usually impossible — you need either server-side logic or an edge function in front of the static bundle.
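The first two symptoms are mechanical enough to classify from a pair of probe responses. A minimal sketch (the function and its input shape are ours, not a standard API):

```javascript
// Given the bodies returned for two different Accept headers and the
// status code for a nonexistent path, list the static-SPA symptoms present.
function staticSpaSymptoms({ htmlForHtmlAccept, htmlForMarkdownAccept, unknownPathStatus }) {
  const symptoms = [];
  if (htmlForHtmlAccept === htmlForMarkdownAccept) {
    symptoms.push('no content negotiation: identical body for every Accept header');
  }
  if (unknownPathStatus === 200) {
    symptoms.push('HTML 200 for unknown paths instead of a structured 404');
  }
  return symptoms;
}
```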

## Fix ladder

Pick the cheapest option that solves your case.

### 1. Prerender at build time

If your homepage content is largely static, prerender it during your build. The browser still hydrates with React/Vue/etc., but the initial HTML the server sends already contains the rendered DOM. Crawlers see real content.

- **Vite + react-snap** or **vite-plugin-prerender**
- **Astro** (`output: 'static'`) — JS-rendered components are server-rendered to HTML at build time
- **Next.js** with `output: 'export'` or `getStaticProps`
- **Nuxt** with `nuxt generate`

This is the lowest-effort fix for marketing pages, blogs, and documentation.
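As a concrete example from the list above: in a recent Next.js (13.3 or later), static export is a one-line config, assuming the site uses no request-time server features:

```javascript
// next.config.js — prerender every page to plain HTML at build time
module.exports = {
  output: 'export',
};
```

`next build` then writes the prerendered site to `out/`, which deploys to any static host.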

### 2. Full server-side rendering (SSR)

If the homepage needs dynamic data (logged-in state, A/B tests, fresh content), use a framework that renders on every request:

- **Next.js** (default App Router with React Server Components)
- **Remix**
- **SvelteKit**
- **Nuxt** SSR mode

SSR requires a runtime (Node, Bun, Deno, or an edge runtime), which means leaving pure static hosting.

### 3. Edge function for content negotiation

If migrating to SSR is too disruptive, put an edge function in front of your static bundle that intercepts requests and serves an alternative representation:

```javascript
// Vercel Edge / Netlify Edge handler shape; on Cloudflare Workers, wrap
// the same logic as `export default { async fetch(request) { ... } }`.
export default async function (request) {
  const accept = request.headers.get('accept') || '';
  const ua = (request.headers.get('user-agent') || '').toLowerCase();
  const isAgent = /claudebot|gptbot|chatgpt-user|perplexitybot|google-extended/.test(ua);

  if (isAgent || accept.includes('text/markdown')) {
    // fetchMarkdownSummary() is your own helper, e.g. fetch a prebuilt
    // /index.md asset or return a hardcoded summary string.
    return new Response(await fetchMarkdownSummary(), {
      headers: { 'Content-Type': 'text/markdown; charset=utf-8', 'Vary': 'Accept, User-Agent' },
    });
  }

  return fetch(request); // pass through to the static bundle
}
```

This is the right fix when the underlying app must stay client-rendered (e.g. a heavy interactive app), but you still want agents to read meaningful content.
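The negotiation decision is easy to pull into a pure function so it can be unit-tested outside any edge runtime. A sketch mirroring the worker's logic:

```javascript
// Should this request get the Markdown representation instead of the SPA shell?
// Mirrors the User-Agent / Accept decision in the worker sketch above.
function wantsMarkdown(userAgent, accept) {
  const ua = (userAgent || '').toLowerCase();
  const isAgent = /claudebot|gptbot|chatgpt-user|perplexitybot|google-extended/.test(ua);
  return isAgent || (accept || '').includes('text/markdown');
}
```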

### 4. `<noscript>` fallback (minimum viable)

At minimum, include your key content inside a `<noscript>` block in the static HTML:

```html
<noscript>
  <h1>Your Company</h1>
  <p>What you do, in two sentences. Link to docs, pricing, API.</p>
  <a href="/about">About</a> · <a href="/docs">Docs</a> · <a href="/pricing">Pricing</a>
</noscript>
```

This is not a substitute for prerendering, but it stops crawlers from seeing a completely empty page. Most JS-rendered sites do not have this.

## How to verify

Fetch your site without executing JavaScript and check whether there is real content in the body:

```bash
curl -s -A "ClaudeBot/1.0" https://yourdomain.com/ | python3 -c "
import sys, re
html = sys.stdin.read()
m = re.search(r'<body[^>]*>(.*?)</body>', html, re.S | re.I)
body = m.group(1) if m else ''
# drop inline script/style contents so they don't count as text
body = re.sub(r'<(script|style)[^>]*>.*?</\1>', '', body, flags=re.S | re.I)
print(len(re.sub(r'<[^>]+>', '', body).strip()))
"
```

If the printed number is under a few hundred characters, agents are not seeing your content.

## Learn more

- [Google: Rendering on the Web](https://web.dev/articles/rendering-on-the-web)
- [Vercel: Static vs. SSR vs. ISR](https://vercel.com/docs/frameworks/nextjs#rendering-strategies)
- [Astro Islands](https://docs.astro.build/en/concepts/islands/)

## Related

- [WebMCP](/kb/webmcp)
- [llms.txt](/kb/llms-txt)
