Open Agent Trust Specification

A common language for trust in AI agent ecosystems. OATS defines how platforms score behavior, issue portable credentials, and share reputation — across any provider, any framework, any cloud.

What OATS Defines

Trust Score Schema

Four dimensions — identity, risk, reliability, autonomy — with confidence values and five standard scoring profiles. Deterministic, explainable, reproducible.
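As a sketch only, the four-dimension score with per-dimension confidence values might be modeled like this; every field and class name below is an illustrative assumption, not taken from the spec text:

```python
# Hypothetical model of an OATS trust score. The dimension names come
# from the spec summary above; everything else is assumed for illustration.
from dataclasses import dataclass


@dataclass
class Dimension:
    score: float       # normalized 0.0-1.0 score for this dimension
    confidence: float  # how much evidence backs the score


@dataclass
class TrustScore:
    identity: Dimension
    risk: Dimension
    reliability: Dimension
    autonomy: Dimension
    profile: str  # which of the five standard scoring profiles was used


score = TrustScore(
    identity=Dimension(0.92, 0.80),
    risk=Dimension(0.75, 0.60),
    reliability=Dimension(0.88, 0.90),
    autonomy=Dimension(0.40, 0.70),
    profile="default",
)
```

Because the inputs are plain numbers and the profiles are fixed, the same events always produce the same score: deterministic and reproducible.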

Event Taxonomy

22 standardized event types that feed into scoring. Task completion, tool calls, security incidents, moderation actions. Map your agent activity once, score everywhere.
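"Map your agent activity once" suggests a translation table from platform-local event names to the standard taxonomy. A minimal sketch, where both the platform event names and the OATS type identifiers are hypothetical:

```python
# Hypothetical mapping from one platform's internal event names to
# OATS taxonomy types. The identifiers on both sides are assumptions.
OATS_EVENT_MAP = {
    "job.finished": "task.completed",
    "job.errored": "task.failed",
    "plugin.invoke": "tool.call",
    "abuse.flagged": "security.incident",
    "mod.ban": "moderation.action",
}


def to_oats_event(platform_event: str) -> str:
    """Translate a platform-local event once; score it everywhere."""
    return OATS_EVENT_MAP[platform_event]
```

With the mapping in place, the platform's existing event stream feeds the scoring engine unchanged.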

Portable Credentials

ES256-signed JWTs carrying trust claims. Any platform can verify an agent's trust status without an account or API key. JWKS-based key distribution.
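To show the shape of such a credential, here is a stdlib-only sketch that decodes the claims segment of a JWT without verifying it; the claim names are assumptions, and a real verifier would fetch the issuer's JWKS and check the ES256 signature with a crypto library (e.g. PyJWT) rather than skip that step:

```python
import base64
import json


def b64url_decode(segment: str) -> bytes:
    # JWT segments use unpadded base64url; restore padding first.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))


def read_claims(token: str) -> dict:
    """Decode (NOT verify) the claims segment of a JWT.

    Demonstration only: production code must verify the ES256
    signature against the issuer's JWKS before trusting any claim.
    """
    _header, payload, _signature = token.split(".")
    return json.loads(b64url_decode(payload))


# Build a throwaway token for demonstration; claim names are hypothetical.
claims = {"sub": "agent-123", "trust_profile": "default"}
payload_b64 = base64.urlsafe_b64encode(
    json.dumps(claims).encode()
).rstrip(b"=").decode()
fake_token = "e30." + payload_b64 + ".unsigned"

print(read_claims(fake_token)["sub"])
```

Because verification needs only the issuer's public keys (via JWKS), a relying platform never needs an account or API key with the issuing platform.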

Reputation Signals

Privacy-preserving behavioral reports. Hashed identities, aggregate scores, opt-in participation. The network gets smarter with every platform that joins.
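One common way to get hashed identities plus aggregate-only scores is a salted hash over the agent ID. A sketch under that assumption (the report fields and the salting scheme are illustrative, not spec-defined):

```python
import hashlib


def hashed_identity(agent_id: str, salt: str) -> str:
    # Salted SHA-256 so raw agent IDs never leave the platform.
    return hashlib.sha256(f"{salt}:{agent_id}".encode()).hexdigest()


def reputation_report(agent_id: str, aggregates: dict, salt: str) -> dict:
    # Aggregate scores only -- no per-event detail crosses the wire.
    return {
        "agent": hashed_identity(agent_id, salt),
        "aggregates": aggregates,
    }


report = reputation_report(
    "agent-123",
    {"reliability": 0.88, "risk": 0.75},
    salt="per-network-salt",
)
```

Platforms that share the network salt can correlate reports about the same agent; anyone without it sees only an opaque digest.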

Three Conformance Levels

Level 1: Trust Scoring. Level 2: Portable Credentials. Level 3: Reputation Network. Implement as much or as little as you need.

CC-BY-4.0 Licensed

The specification is open. Anyone can implement OATS. The standard belongs to the ecosystem, not to any single vendor.

Why an Open Standard?

1. Identity providers don't do trust

Knowing who an agent is doesn't tell you whether it's safe. Identity providers verify credentials; OATS scores behavior. The two are complementary.

2. Trust should be portable

An agent that builds trust on one platform shouldn't start from zero on another. OATS credentials travel with the agent — verifiable by anyone, anywhere.

3. Reputation needs a network

A single platform's view of an agent is limited. OATS enables cross-provider reputation sharing so bad actors can't hop between platforms undetected.

Implement OATS

VeriSwarm is the reference implementation: all three conformance levels, an open-source scoring engine, and SDKs in Python and Node.js.