Compliance · 9 min read · February 26, 2026

Why agent signups constitute ToS violations — and what to do about it

Anon Team

The Uncomfortable Truth About Agent Account Creation

Every major SaaS platform has a clause in its Terms of Service that prohibits automated account creation. It's standard boilerplate — designed to prevent spam, fraud, and abuse. But in 2026, those same clauses are catching something they were never designed to block: legitimate AI agents acting on behalf of paying customers.

Here's a typical clause from a well-known SaaS provider:

"You may not use automated means, including bots, scripts, or web scraping tools, to create accounts, access the service, or collect data from the service."

When a coding agent like Claude Code or Devin needs to sign up for a service to complete a task, it has two options: ask a human to do it manually (slow, defeats the purpose of autonomy) or automate the signup process using browser automation. The second option — the one that actually makes agents useful — violates virtually every SaaS provider's ToS.

The Legal Exposure Is Real

This isn't a theoretical concern. The legal implications break down across three dimensions:

For Agent Operators

Companies deploying AI agents that automate signups face breach-of-contract claims. When an agent creates an account using browser automation, the operator has agreed to terms they're immediately violating. This means:

  • Account termination without recourse. The provider can shut down the account at any time, potentially mid-workflow, destroying whatever the agent was building.
  • Data loss and liability. Any data stored in the automated account can be wiped. If that data belongs to the agent operator's customers, there's a chain of liability.
  • Potential litigation. Under the Computer Fraud and Abuse Act (CFAA) and similar legislation, unauthorized automated access to computer systems can carry civil and criminal penalties. Courts have increasingly sided with platform operators in cases involving automated access that violates ToS.

For SaaS Providers

Providers aren't off the hook either. When they discover automated signups — and they will, through behavioral analytics, device fingerprinting, or traffic analysis — they face a set of decisions with no good answers:

  • Ban the agent, lose the revenue. The agent's human principal was a legitimate customer willing to pay. Banning the account means losing a paying user.
  • Ignore it, set a precedent. Looking the other way on ToS violations creates legal exposure and weakens enforcement against actual bad actors.
  • Distinguish agents from bots. Without a proper agent identity framework, there's no way to tell a legitimate AI agent from a spam bot. They look identical in server logs.

For End Users

The humans who authorized the agent are caught in the middle. They may not even know their agent violated the provider's ToS. But their payment method is attached to the account, and they're the ones who'll deal with the fallout.

Why "Just Update the ToS" Isn't Enough

The obvious answer — "just update your Terms of Service to allow agent signups" — misses the point. Even if a provider wanted to welcome agent traffic, they'd need answers to questions their current infrastructure can't handle:

Identity: Who is this agent? Who authorized it? What organization does it belong to?

Authorization: What permissions should it have? Can it access everything a human user can, or should it be scoped?

Accountability: If the agent does something destructive, who's responsible? How do you audit its actions?

Billing: How do you charge for agent usage? Is it the same pricing model as human users?

None of these questions have answers in a traditional signup flow. A form that asks for "First Name, Last Name, Email" is designed for humans. Agents have none of these attributes in any meaningful sense.
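To make the contrast concrete, here is a minimal sketch of the attributes an agent-native signup would need to capture instead of a name and email. Every field name and class here is illustrative, not any real platform's schema:

```python
# Hypothetical sketch of an agent-native signup request.
# Field names are illustrative, not an existing platform's schema.
from dataclasses import dataclass, field


@dataclass
class AgentSignupRequest:
    agent_id: str          # stable identifier for this agent instance
    framework: str         # the runtime the agent is built on
    organization: str      # who operates the agent (accountability)
    principal_email: str   # the human who authorized it (authorization chain)
    requested_scopes: list[str] = field(default_factory=list)  # least privilege

    def missing_fields(self) -> list[str]:
        """Return which accountability fields are absent, so a provider
        can reject anonymous automated signups up front."""
        return [
            name
            for name in ("agent_id", "organization", "principal_email")
            if not getattr(self, name)
        ]


req = AgentSignupRequest(
    agent_id="agt_01",
    framework="example-agent-runtime",
    organization="Acme Corp",
    principal_email="ops@acme.example",
    requested_scopes=["catalog:read"],
)
print(req.missing_fields())  # → []
```

A request that omits the organization or principal would surface exactly which accountability link is broken, rather than silently passing a human-shaped form.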

The Cost of the Status Quo

Research from the Brookings Institution estimates that by 2027, autonomous AI agents will initiate over 40% of new SaaS account signups. If the current approach doesn't change, that means 40% of new accounts will be in a legal gray zone — vulnerable to termination, creating data liability, and generating unreliable analytics.

The economic incentive is clear: agents are becoming a material revenue channel. Gartner projects that AI agents will drive $47 billion in SaaS spending by 2028. Providers who force this revenue through a legally dubious signup process are building on sand.

Free Tool

How agent-ready is your website?

Run a free scan to see how AI agents experience your signup flow, robots.txt, API docs, and LLM visibility.

Run a free scan →

What a Compliant Agent Onboarding Flow Looks Like

The solution isn't to relax security — it's to create a dedicated, secure pathway for agent access that satisfies legal, security, and business requirements simultaneously.

A compliant agent onboarding flow has five properties:

  1. Verified agent identity. The agent presents a cryptographic identity that includes the framework it's running on, the organization it belongs to, and the human principal who authorized it.

  2. Explicit scope requests. Instead of getting a full user account, the agent requests specific API scopes — read access to the product catalog, write access to a staging environment, whatever it needs for its task.

  3. Human-in-the-loop approval. The agent's human principal receives a notification with the scope request and approves it. This creates a clear authorization chain.

  4. Native credential issuance. The provider issues real API credentials — not scraped session cookies or stolen auth tokens — through their existing auth infrastructure.

  5. Audit trail. Every action is logged with the agent's identity, the human who authorized it, and the scopes that were granted. This satisfies compliance requirements and protects both parties.
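The five properties above can be sketched as a single flow. This is a minimal illustration with in-memory stand-ins for what would be real services (signature verification, principal notification, credential issuance, audit storage); every function name and token here is hypothetical:

```python
# Sketch of the five-step onboarding flow. All names are hypothetical
# stand-ins, not a real provider's API.
import secrets
from datetime import datetime, timezone

AUDIT_LOG = []


def verify_agent_identity(token: str) -> dict:
    # Stand-in for cryptographic verification of the agent's identity
    # (property 1). A real check would validate a signature, not compare
    # against a fixed demo token.
    if token != "demo-signed-token":
        raise PermissionError("unverifiable agent identity")
    return {"agent_id": "agt_01", "org": "Acme Corp",
            "principal": "ops@acme.example"}


def human_approves(principal: str, scopes: list[str]) -> bool:
    # Stand-in for the notification + approval step (property 3).
    # A real flow would wait for the principal to respond.
    return True


def onboard_agent(token: str, scopes: list[str]) -> str:
    identity = verify_agent_identity(token)              # 1. verified identity
    if not human_approves(identity["principal"], scopes):  # 3. human approval
        raise PermissionError("principal declined scope request")
    credential = "key_" + secrets.token_hex(8)           # 4. native credential
    AUDIT_LOG.append({                                   # 5. audit trail
        "at": datetime.now(timezone.utc).isoformat(),
        "agent": identity["agent_id"],
        "approved_by": identity["principal"],
        "scopes": scopes,                                # 2. explicit scopes
    })
    return credential


credential = onboard_agent("demo-signed-token", ["catalog:read"])
```

Note that the credential is minted by the provider's own issuance path, and every grant lands in the audit log with the agent, the approving human, and the scopes, which is exactly the chain a browser-automated signup cannot produce.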

The Regulatory Landscape Is Moving Fast

The EU AI Act, whose transparency obligations take effect during 2026, includes provisions around AI system identification. Agents that interact with services must be identifiable as AI systems, a requirement that browser automation inherently defeats (the whole point of browser automation is to look human).

Similarly, NIST's AI Risk Management Framework emphasizes accountability and governance in AI system interactions. Using browser automation to create accounts is fundamentally ungovernable — there's no logging, no identity verification, and no way to revoke access granularly.

Providers who adopt agent-native onboarding now position themselves ahead of incoming regulation, rather than scrambling to retrofit compliance later.

What Forward-Thinking Providers Are Doing

A growing number of SaaS providers are recognizing that agent traffic is not a threat to be blocked but a channel to be managed. They're implementing:

  • Machine-readable access policies on their signup pages, allowing agents to discover what access is available without scraping HTML.
  • Programmatic access request endpoints that accept identity-verified requests from agents.
  • Agent-specific pricing tiers that reflect the usage patterns of agents (high-volume API calls, short session duration, automated workflows).
  • Integration with agent identity platforms like Anon that handle the identity verification, human approval, and credential management.
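As a concrete illustration of the first two items, a machine-readable access policy could be a small JSON document published at a predictable URL, letting an agent discover what access exists without scraping HTML. Every path, field name, and value below is hypothetical; no such standard exists yet:

```json
{
  "agent_access": {
    "request_endpoint": "https://api.example.com/agent/access-requests",
    "identity_required": ["agent_id", "organization", "principal_email"],
    "available_scopes": ["catalog:read", "orders:write", "staging:admin"],
    "approval": "human-in-the-loop",
    "pricing_tier": "agent-metered"
  }
}
```

An agent that fetches this document knows which identity fields to present, which scopes it may request, and that a human approval step follows, before it ever touches the signup form.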

The Bottom Line

Agent signups via browser automation are a ToS violation today. They'll be a regulatory violation tomorrow. The providers who solve this now — by creating native, compliant onboarding paths for agents — will capture the fastest-growing segment of SaaS adoption. The ones who don't will watch that revenue go to competitors who did.

The question isn't whether to support agent access. It's whether you want to do it legally.

Get Started

Ready to make your product agent-accessible?

Add a few lines of code and let AI agents discover, request access, and get real credentials — with human oversight built in.

Get started with Anon →