We benchmarked 163 SaaS companies for agent readiness. Nobody scored above 80.
The ceiling is 78 out of 100
We benchmarked 163 SaaS companies using the AgentGate benchmark — a tool that evaluates how accessible a website is to AI agents across eight scoring dimensions. These aren't obscure startups. The list includes Stripe, Slack, MongoDB, Supabase, AWS, Cloudflare, Salesforce, GitHub, and dozens of other companies that developers build on every day.
The highest score: 78 out of 100, earned by MongoDB.
Nobody cracked 80. Not Stripe. Not AWS. Not a single company in our dataset.
This isn't because our benchmark is unreasonably strict. It's because the SaaS industry — even the developer-tools sector that should be best positioned for this — hasn't built for agent-first access. They've built for humans with browsers. The data makes this painfully clear.
Here's what we found.
The dataset
We selected 163 SaaS companies spanning developer tools, infrastructure, payments, analytics, CRM, security, and AI/ML platforms. The selection skewed toward companies with public APIs and developer-facing products — the cohort most likely to score well.
Each company was evaluated across eight dimensions:
| Dimension | Max Score | What It Measures |
|---|---|---|
| Crawl Access | 20 | robots.txt openness, no AI-specific blocks |
| Signup Friction | 20 | Can an agent find and complete signup? |
| API Documentation | 20 | Quality, completeness, machine-readability of docs |
| Bot Protection | 15 | CAPTCHAs, WAF blocks, anti-bot measures on key flows |
| Agent Features | 10 | llms.txt, ai-plugin.json, MCP endpoints, agent-specific APIs |
| Pricing Transparency | 8 | Programmatically parseable pricing information |
| Onboarding Automation | 7 | How much of post-signup setup can be automated? |
| Agent Auth Penalty | -25 | Applied when signup requires human-in-the-loop authentication |
That last one matters. The -25 penalty for lacking autonomous agent authentication is the single biggest scoring factor — and 95.1% of companies in our dataset were penalized for it. More on that later.
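The arithmetic behind the scoring is straightforward. As a sketch of how a composite score like this could be computed (the dimension names and logic below are our illustration, not AgentGate's actual implementation):

```python
# Illustrative sketch of an AgentGate-style total. Dimension maxima mirror
# the table above; the scoring logic itself is a reconstruction, not the
# benchmark's real code.

DIMENSION_MAX = {
    "crawl_access": 20,
    "signup_friction": 20,
    "api_documentation": 20,
    "bot_protection": 15,
    "agent_features": 10,
    "pricing_transparency": 8,
    "onboarding_automation": 7,
}
AGENT_AUTH_PENALTY = 25  # applied when signup needs a human in the loop

def total_score(scores: dict, requires_human_auth: bool) -> int:
    for dim, value in scores.items():
        assert 0 <= value <= DIMENSION_MAX[dim], f"{dim} out of range"
    total = sum(scores.values())
    if requires_human_auth:
        total -= AGENT_AUTH_PENALTY
    return max(total, 0)

# A hypothetical company: strong on the basics, weak on agent features,
# and penalized because signup still needs a human.
scores = {
    "crawl_access": 20,
    "signup_friction": 18,
    "api_documentation": 20,
    "bot_protection": 15,
    "agent_features": 5,
    "pricing_transparency": 4,
    "onboarding_automation": 4,
}
print(total_score(scores, requires_human_auth=True))  # 61
```

The point the sketch makes: the penalty dwarfs any single dimension, so two otherwise identical companies end up 25 points apart on authentication alone.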
Score distribution: the industry is clustered in mediocrity
| Score Range | Companies | Percentage |
|---|---|---|
| 80–100 | 0 | 0.0% |
| 60–79 | 97 | 59.5% |
| 40–59 | 55 | 33.7% |
| 20–39 | 10 | 6.1% |
| 0–19 | 1 | 0.6% |
Mean score: 60.1. Median: 67.0. Standard deviation: 12.4.
Nearly 60% of companies land in the 60–79 range — meaning they do the basics right (have APIs, don't actively block crawlers, publish docs) but fail at the hard stuff (agent authentication, automated onboarding, machine-readable interfaces).
The distribution is left-skewed: most companies cluster around "decent but not agent-ready," with a long tail trailing down to companies that barely register.
The leaderboard: who's closest to agent-ready?
Top 10
| Rank | Company | Score | Notable Strengths |
|---|---|---|---|
| 1 | MongoDB | 78 | Perfect crawl access, API docs, agent features (llms.txt) |
| 2 | AssemblyAI | 77 | Best onboarding automation (7/7), minimal friction |
| 3 | Resend | 75 | llms.txt, OpenAPI spec, zero bot protection |
| 4 | Supabase | 75 | llms.txt, strong docs, zero bot protection |
| 5 | Slack | 75 | ai-plugin.json, llms.txt, comprehensive API |
| 6 | Strapi | 70 | Open-source advantage, excellent docs |
| 7 | AWS | 70 | Massive doc surface area, no signup friction |
| 8 | Replicate | 70 | AI-native, llms.txt, developer-friendly |
| 9 | Render | 70 | llms.txt, clean docs, no bot protection |
| 10 | Postmark | 70 | llms.txt, transparent pricing, clean APIs |
The top scorers share a profile: developer-focused companies with comprehensive API documentation, no aggressive bot protection, and early adoption of agent-specific standards like llms.txt. But even MongoDB at #1 lost 25 points for lacking fully autonomous agent authentication — its signup still requires human interaction.
Bottom 10
| Rank | Company | Score | Primary Issues |
|---|---|---|---|
| 154 | Nexus (Sonatype) | 34 | Limited docs, no signup, no agent features |
| 155 | Unleash | 34 | Minimal API documentation |
| 156 | Wix | 31 | CAPTCHA on signup, blocks AI crawlers |
| 157 | Codecov | 27 | Limited standalone presence |
| 158 | Chroma | 25 | Despite llms.txt, minimal platform features |
| 159 | Elasticsearch.co | 25 | Redirect domain, no functional signup |
| 160 | Elastic.co | 24 | Complex enterprise signup flow |
| 161 | Weights & Biases (.com) | 23 | Fetch failures, no accessible docs |
| 162 | AutoGen.ai | 20 | AI art site, not a developer platform |
| 163 | Artifactory.com | 15 | Personal portfolio site, not SaaS |
A few bottom entries (Artifactory.com, AutoGen.ai) are effectively misclassified — the domains don't host the expected SaaS products. But others like Wix, Elastic, and Weights & Biases are legitimate developer platforms that simply haven't optimized for agent access at all.
Five findings from the data
1. The -25 penalty is nearly universal — and it's the right call
95.1% of companies received the -25 agent authentication penalty. Only 8 companies avoided it: PayPal, Zoom, Sentry, JFrog, Elastic, Wix, Akamai, and Fastly — and most of those avoided it through edge cases in scoring, not because they actually offer autonomous agent signup.
This penalty represents the single biggest gap in the industry. A company can score 20/20 on crawl access, 20/20 on API docs, and 20/20 on signup friction — and still max out at 78 because no one has solved the fundamental problem: How does an AI agent create an account and get API credentials without a human clicking through a browser?
The challenge is real. Agent authentication touches identity verification, billing, abuse prevention, and legal compliance. But the companies that solve it first will have a massive advantage in the agent economy. When 72–79% of enterprises are deploying or testing AI agents (Zapier AI Agents Survey, 2025), the inability to onboard those agents programmatically is a growth bottleneck.
2. llms.txt adoption is the strongest predictor of a high score
64 out of 163 companies (39.2%) had adopted llms.txt, the emerging standard for providing LLM-friendly documentation. Their average score: 66.1, compared to 56.3 for companies without it.
That 10-point gap is significant but not surprising. Companies that adopt llms.txt are signaling that they're thinking about AI agent access at all — which correlates with better API docs, more transparent pricing, and fewer anti-bot measures. It's not that llms.txt itself is worth 10 points (it contributes to the Agent Features dimension, max 10 points). It's that the kind of company that adopts llms.txt also tends to do everything else right.
The standard is gaining traction fast. Notable adopters include MongoDB, Supabase, Slack, Stripe, Shopify, Vercel, Cloudflare, Datadog, Salesforce, and Twilio. If you're a developer-facing SaaS company and you haven't added an llms.txt file yet, you're already behind 39% of your peers.
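For reference, an llms.txt file is plain markdown: a title, a one-line summary, and curated links to your docs, per the llms.txt proposal. A minimal hypothetical example (every name and URL below is a placeholder):

```markdown
# ExampleCo

> ExampleCo is a hypothetical email API. This file gives LLMs and agents
> a curated map of our documentation.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): Send your first email
- [API Reference](https://example.com/docs/api.md): Endpoints, auth, errors

## Optional

- [Changelog](https://example.com/changelog.md): Recent API changes
```

It lives at the site root (`/llms.txt`), alongside robots.txt.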
3. CAPTCHAs are a tax on agent readiness
24 companies (14.7%) had CAPTCHAs on their signup or key flows. Their average score: 57.7, versus 60.6 for companies without CAPTCHAs.
The gap is smaller than you'd expect because CAPTCHAs correlate with scale. Many of the companies using them — Stripe, Cloudflare, Atlassian, GitHub — are large platforms that score well on other dimensions. Stripe scored 68 despite its CAPTCHA. GitHub scored 49 with one.
But the data is clear on the direction: CAPTCHAs are a net negative for agent readiness. And they're becoming less effective against sophisticated bots anyway. As Imperva's 2025 Bad Bot Report found, automated traffic now accounts for 51% of all web traffic — the first time in a decade it has surpassed human traffic. Traditional CAPTCHAs are fighting a losing battle.
The smart play isn't to add more CAPTCHAs. It's to build authentication and rate-limiting systems that can distinguish between malicious bots and legitimate AI agents — and let the agents through.
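What that alternative can look like in code: a minimal token-bucket sketch that gives identified agents a generous limit and anonymous traffic a small one, in place of a CAPTCHA. All limits and names here are illustrative, not a prescription:

```python
# Sketch of CAPTCHA-free abuse control: a token bucket keyed by credential.
# Authenticated agents (valid API key) get a high limit; anonymous traffic
# gets a small one. Limits are illustrative.

import time
from dataclasses import dataclass, field

@dataclass
class Bucket:
    rate: float       # tokens refilled per second
    capacity: float   # burst size
    tokens: float = 0.0
    last: float = field(default_factory=time.monotonic)

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets: dict[str, Bucket] = {}

def allow_request(api_key, ip: str) -> bool:
    if api_key:   # identified agent: generous budget
        key, rate, cap = f"key:{api_key}", 50.0, 100.0
    else:         # anonymous: tight budget, instead of a CAPTCHA
        key, rate, cap = f"ip:{ip}", 1.0, 5.0
    bucket = buckets.setdefault(key, Bucket(rate=rate, capacity=cap, tokens=cap))
    return bucket.allow()
```

The design choice worth noting: the limit is attached to an identity, not a browser challenge, so a well-behaved agent never hits a wall it can't pass programmatically.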
4. OAuth is nearly universal but insufficient
77.3% of companies support OAuth. Their average score: 63.6, versus 48.5 for companies without OAuth.
OAuth is a strong signal of developer maturity, and it clearly helps. But having OAuth doesn't mean agents can use it. OAuth flows are designed for humans in browsers — redirects, consent screens, clicking "Authorize." An AI agent can technically complete these flows through browser automation, but it's brittle, slow, and breaks constantly.
What's missing is wide, self-service deployment of OAuth's machine-to-machine equivalent. The OAuth 2.0 Client Credentials Grant exists. M2M token exchange is a solved protocol. But most companies gate API key issuance behind a human-operated dashboard or sales process.
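For comparison, here is how simple the machine side already is. A sketch of the Client Credentials token request an agent would send (the token URL, client ID, and secret are placeholders):

```python
# OAuth 2.0 Client Credentials Grant (RFC 6749 §4.4): a machine exchanges
# its client_id/client_secret for an access token in one POST, no browser.
# The token URL and credentials below are placeholders.

from urllib.parse import urlencode

def build_token_request(token_url: str, client_id: str, client_secret: str,
                        scope: str = ""):
    """Return (url, headers, body) for a client-credentials token request."""
    form = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }
    if scope:
        form["scope"] = scope
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    return token_url, headers, urlencode(form)

url, headers, body = build_token_request(
    "https://auth.example.com/oauth/token", "agent-123", "s3cret", scope="read"
)
# POSTing `body` to `url` with `headers` would return JSON like:
# {"access_token": "...", "token_type": "Bearer", "expires_in": 3600}
```

No redirects, no consent screen, no clicking "Authorize." The protocol isn't the bottleneck; the missing piece is letting a machine obtain the client credentials in the first place.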
The companies that scored highest — MongoDB, Resend, Supabase — combined OAuth with other agent-friendly features. OAuth alone doesn't move the needle.
5. Onboarding automation is the weakest dimension across the board
| Onboarding Score | Companies |
|---|---|
| 0 (no automation) | 15 |
| 1 | 28 |
| 4 | 93 |
| 5 | 1 |
| 7 (full automation) | 21 |
Average onboarding automation score: 3.5 out of 7.
Only 21 companies (12.9%) achieved full marks on onboarding automation. The majority (57%) scored 4 — meaning they have some automated setup (like API key generation from a dashboard) but still require human interaction for critical steps.
This is the unsexy dimension that matters most. Crawl access and API docs are table stakes. Agent features like llms.txt are a checkbox. But onboarding automation — the ability for an agent to go from "I found this product" to "I'm using this product" without human intervention — is the dimension that separates browsable platforms from usable ones.
The macro context: why this matters now
These benchmark results don't exist in a vacuum. The AI agent market is accelerating:
- $7.3–8.8 billion in agentic AI revenue in 2025, projected to reach $139–324 billion by 2034 (Precedence Research)
- AI bot traffic grew 4.5× in 2025, with automated requests now exceeding human browsing behavior on many sites (Barilliance, 2026)
- By Q4 2025, there was roughly one AI bot visit for every 31 human visits to a website, up from one per 200 in Q1 (TollBit State of the Bots)
- 99% of organizations plan to eventually deploy agentic AI, though only 11% had reached that stage by mid-2025 (KPMG)
- Enterprise AI spending hit $37 billion by end of 2025 — roughly 6% of global SaaS spend (Menlo Ventures)
When you map our benchmark data against this macro picture, a gap emerges: massive demand growth for agent access, and near-universal inability to supply it. Enterprises are deploying agents. Those agents are visiting SaaS websites. And 95% of those websites can't let the agents sign up without a human holding their hand.
This is a temporary market inefficiency. It will correct. The companies that correct it first will capture disproportionate growth.
What would a score of 100 look like?
No company in our dataset scored above 78. Here's what a perfect 100 would require:
Crawl Access (20/20): Open robots.txt, no AI-specific blocks, sitemap available. ✅ 76.8% of companies already do this.
Signup Friction (20/20): Agent can find signup, pricing, and API info programmatically. ✅ 62.8% of companies already do this.
API Documentation (20/20): Complete, well-structured docs with OpenAPI specs. ✅ 64.6% of companies already do this.
Bot Protection (15/15): No CAPTCHAs or aggressive WAF on key flows. ✅ 85.3% of companies already do this.
Agent Features (10/10): llms.txt, ai-plugin.json, MCP endpoints, or equivalent machine-readable interfaces. ⚠️ 39.2% have adopted at least one.
Pricing Transparency (8/8): Machine-readable pricing tiers. ⚠️ Most companies provide some pricing info but not in structured format.
Onboarding Automation (7/7): Fully automated post-signup setup, including API key generation. ⚠️ Only 12.9% achieve this.
Agent Auth (no penalty): Fully autonomous signup and credential issuance via API — no browser required. ❌ 4.9% avoid this penalty, mostly through scoring edge cases.
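There is no standard for machine-readable pricing yet, so purely as an illustration, here is one shape a hypothetical `/pricing.json` could take, along with the kind of check an agent might run against it:

```python
# No machine-readable pricing standard exists yet; this schema is one
# illustrative possibility, not an AgentGate requirement.
import json

PRICING_JSON = """
{
  "currency": "USD",
  "tiers": [
    {"name": "free",  "monthly_price": 0,   "included_requests": 10000},
    {"name": "pro",   "monthly_price": 49,  "included_requests": 1000000},
    {"name": "scale", "monthly_price": 499, "included_requests": 20000000}
  ]
}
"""

def cheapest_tier_for(pricing: dict, monthly_requests: int) -> str:
    """Pick the lowest-priced tier whose quota covers the expected load."""
    eligible = [t for t in pricing["tiers"]
                if t["included_requests"] >= monthly_requests]
    return min(eligible, key=lambda t: t["monthly_price"])["name"]

pricing = json.loads(PRICING_JSON)
print(cheapest_tier_for(pricing, 50000))  # pro
```

An agent that can run this comparison can also pick a plan and proceed to signup; an agent that has to scrape a marketing page often can't.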
The blocker is clear. Most SaaS companies are already 60–70% of the way there on the easy dimensions. What's missing — and what would unlock that last 30 points — is programmatic agent authentication and onboarding. The M2M token flows, the self-service API key issuance, the agent identity verification.
This is the problem space AgentGate was built to address.
Methodology notes
Benchmark version: AgentGate Benchmark v1 (February 2026)
Scoring approach: Each company's homepage and key pages (signup, docs, pricing, API reference) were analyzed using a combination of automated crawling and LLM-based evaluation. The benchmark checks robots.txt configuration, signup flow accessibility, API documentation quality, bot protection presence, and agent-specific features.
Limitations:
- The -25 agent auth penalty is binary — it doesn't distinguish between "almost there" and "no automation at all."
- Some companies may have agent-friendly features not discoverable through our scanning methodology.
- Scores reflect a point-in-time assessment. Companies may have improved since benchmarking.
- The dataset skews toward developer-facing companies, which are likely more agent-ready than the broader SaaS market.
Reproducibility: You can run the benchmark yourself at anon-dev.com/benchmark on any domain. The leaderboard at anon-dev.com/leaderboard shows live results.
What you should do
If you're a SaaS company reading this:
1. Run the benchmark on your own domain. anon-dev.com/benchmark takes 60 seconds. Know your score before your competitors tell you.
2. Add llms.txt. It's a text file. It takes 30 minutes. And it signals to every AI agent and LLM that your platform is ready for them. Learn how →
3. Audit your CAPTCHA placement. Are CAPTCHAs on flows that agents will need to complete? Move them, replace them with rate limiting, or add a parallel agent-friendly flow.
4. Publish an OpenAPI spec. If your API docs are just HTML pages, agents can read them but can't reliably parse them. A machine-readable spec is the difference between "an agent can maybe figure out your API" and "an agent can integrate with your API in seconds."
5. Build an M2M authentication flow. This is the hard one, and the one that matters most. Design an API endpoint where an agent can create an account, agree to terms programmatically, and receive API credentials, all without touching a browser. If you need a framework, we wrote a guide.
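To make that last item concrete, here is a minimal sketch of the core logic such an endpoint could implement (in-memory storage, no framework; every name, field, and policy here is hypothetical, not a prescription):

```python
# Hypothetical core logic for an agent-facing signup endpoint: one call
# creates an account, records terms acceptance, and issues a credential.
# An in-memory dict stands in for a real database.

import secrets
import time

ACCOUNTS: dict = {}
TERMS_VERSION = "2026-02"

def agent_signup(org_name: str, contact_email: str,
                 accepted_terms_version: str) -> dict:
    """Equivalent of POST /v1/agent/signup. Returns credentials or an error."""
    if accepted_terms_version != TERMS_VERSION:
        # Force the agent to fetch and accept the current terms explicitly.
        return {"error": "terms_version_mismatch", "current": TERMS_VERSION}
    account_id = f"acct_{secrets.token_hex(8)}"
    api_key = f"sk_{secrets.token_urlsafe(32)}"
    ACCOUNTS[account_id] = {
        "org": org_name,
        "email": contact_email,
        "terms_accepted_at": time.time(),
        "api_key": api_key,   # a real system would store only a hash
        "tier": "free",       # start restricted; upgrade after review
    }
    return {"account_id": account_id, "api_key": api_key, "tier": "free"}

resp = agent_signup("Example Corp", "ops@example.com", TERMS_VERSION)
# resp contains an account_id, a working api_key, and a starting tier
```

Abuse prevention lives in the policy around this handler (restricted free tier, rate limits, billing verification before upgrades), not in a browser challenge in front of it.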
The companies that move on this now won't just score better on a benchmark. They'll be the ones that capture the fastest-growing user segment in SaaS history.
Run the AgentGate benchmark on your domain at anon-dev.com/benchmark. See how every company stacks up on the leaderboard.
Get Started
Ready to make your product agent-accessible?
Add a few lines of code and let AI agents discover, request access, and get real credentials — with human oversight built in.
Get started with Anon →