

How agentgrade scans your site

agentgrade is a diagnostic tool — like SSL Labs for agent-readiness. When someone enters a domain, we send a small set of targeted HTTP requests to check what agent-facing capabilities the site exposes.
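A targeted probe of this kind can be sketched in a few lines. This is an illustration only: the probe paths below (robots.txt, security.txt) are common machine-facing files used as stand-in examples, not agentgrade's actual checklist, and the "HTTP 200 means present" rule is a simplifying assumption.

```python
import urllib.error
import urllib.request

# Illustrative probe targets -- stand-ins, not agentgrade's real list.
PROBE_PATHS = ["/robots.txt", "/.well-known/security.txt"]

USER_AGENT = "agentgrade/0.2 (+https://agentgrade.com/kb/about-scanning)"

def probe_urls(domain: str) -> list[str]:
    """Build the small, fixed set of URLs one scan would request."""
    return [f"https://{domain}{path}" for path in PROBE_PATHS]

def capability_present(url: str, timeout: float = 5.0) -> bool:
    """One targeted request; the capability counts as exposed on HTTP 200."""
    request = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False
```

The point of the shape: a scan is a short, bounded list of requests built up front, not an open-ended crawl that follows links.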

What we check

What we don't do

Data we store

We store scan metadata only: the score, which capabilities were found, and which payment protocols were detected. We do not store page content, response bodies, or any sensitive data from the scanned site.

Rate limits

Our User-Agent

All requests from agentgrade identify themselves with:

agentgrade/0.2 (+https://agentgrade.com/kb/about-scanning)
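If you want to spot these requests in your access logs, matching the token before the slash is enough to survive future version bumps. A minimal sketch:

```python
import re

# Matches the agentgrade product token and captures its version (e.g. "0.2").
AGENTGRADE_UA = re.compile(r"\bagentgrade/(\d+(?:\.\d+)*)")

def is_agentgrade(user_agent: str) -> bool:
    """True if a User-Agent header value came from an agentgrade scan."""
    return AGENTGRADE_UA.search(user_agent) is not None
```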

Does agentgrade respect robots.txt?

robots.txt is a standard for search engine crawlers that systematically discover and index content. agentgrade is a site auditor, not a crawler — it doesn't index or store your content. We read your robots.txt to grade it, not to obey crawling directives. This is the same approach used by Lighthouse, SecurityHeaders.com, and SSL Labs.
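The distinction between obeying a robots.txt and grading one can be made concrete. In the sketch below the file is parsed and reported on as data; the two criteria shown (does it name an AI agent, does it allow the homepage) are illustrative assumptions, not agentgrade's real rubric, and GPTBot is just an example agent name.

```python
from urllib.robotparser import RobotFileParser

def grade_robots(robots_txt: str) -> dict:
    """Audit a robots.txt: parse it and report facts about it,
    rather than using it to gate the scan itself."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {
        # Does the file address any AI agent by name? (GPTBot as an example)
        "names_ai_agents": "gptbot" in robots_txt.lower(),
        # Would a generic crawler be allowed to fetch the homepage?
        "allows_root": parser.can_fetch("*", "https://example.com/"),
    }
```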

How to opt out

You can block agentgrade by filtering requests whose User-Agent contains agentgrade in your firewall or WAF. However, since agentgrade only checks machine-facing configuration that you've intentionally made public, blocking it means you lose visibility into how agents perceive your site.
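The matching rule is usually written in your firewall or WAF, but for illustration, here is the same logic as a minimal sketch, assuming a Python WSGI stack (the application behind it is hypothetical):

```python
def block_agentgrade(app):
    """WSGI middleware sketch: refuse requests whose User-Agent
    contains "agentgrade" before they reach the application."""
    def middleware(environ, start_response):
        user_agent = environ.get("HTTP_USER_AGENT", "")
        if "agentgrade" in user_agent.lower():
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden"]
        return app(environ, start_response)
    return middleware
```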

Questions?

If you have questions about how agentgrade scans your site, open an issue on GitHub.