Insights on AI agent security, trust, and compliance.
Composite trust scores tell you an agent is at 723 out of 1000. They don't tell you which axis failed. Here's how four dimensions plus a 22-event taxonomy give you a diagnosis instead of a number.
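A minimal sketch of the problem: the dimension names, weights, and the 600-point floor below are illustrative assumptions, not the actual taxonomy. The point is that a weighted composite can look healthy while one axis has collapsed.

```python
# Hypothetical dimensions and weights -- illustrative only.
WEIGHTS = {"competence": 0.25, "integrity": 0.30, "reliability": 0.25, "safety": 0.20}

def composite(dims: dict) -> int:
    """Collapse four 0-1000 dimension scores into one weighted number."""
    return round(sum(WEIGHTS[k] * v for k, v in dims.items()))

def diagnose(dims: dict, floor: int = 600) -> list:
    """Return the axes that fell below a per-dimension floor."""
    return [k for k, v in dims.items() if v < floor]

agent = {"competence": 910, "integrity": 880, "reliability": 850, "safety": 310}
print(composite(agent))   # 766 -- one opaque, passing-looking number
print(diagnose(agent))    # ['safety'] -- the axis that actually failed
```

A 766 clears most thresholds; the per-axis view is what turns a number into a diagnosis.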
Most agent identity systems answer who. Almost none of them answer how trustworthy. Here's how a portable, JWKS-verifiable credential carries an agent's live trust score across organizational boundaries — without forcing the relying party to call your API on every check.
A hash-chained audit log only matters if you can run the verification, read the result, and respond when it fails. Here's the endpoint, the response shape, what a real break looks like, and the runbook for the moment it does.
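The core mechanic can be sketched in a few lines. This is an assumed, simplified model (SHA-256 over the previous hash plus a canonically serialized payload); the field names and response shape here are illustrative, not the actual endpoint contract.

```python
import hashlib
import json

def entry_hash(prev_hash: str, payload: dict) -> str:
    # Canonical serialization so verification is deterministic.
    body = json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(prev_hash.encode() + body).hexdigest()

def append(log: list, payload: dict) -> None:
    prev = log[-1]["hash"] if log else "0" * 64
    log.append({"payload": payload, "hash": entry_hash(prev, payload)})

def verify(log: list) -> dict:
    """Recompute the chain; report the first index where it breaks."""
    prev = "0" * 64
    for i, entry in enumerate(log):
        if entry_hash(prev, entry["payload"]) != entry["hash"]:
            return {"valid": False, "break_index": i}
        prev = entry["hash"]
    return {"valid": True, "entries": len(log)}

log = []
append(log, {"event": "tool_call", "tool": "search"})
append(log, {"event": "tool_call", "tool": "send_email"})
assert verify(log)["valid"]

log[0]["payload"]["tool"] = "exfiltrate"   # tamper with history
print(verify(log))  # {'valid': False, 'break_index': 0}
```

Note the asymmetry: a single edited payload invalidates every hash from that point forward, which is why a break report names an index, not just a boolean.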
You hardened the prompt. Great. The injection just pivoted to the tool call. Why prompt-layer defenses miss the actual attack path — and what blocking it requires.
Six new AI laws signed in 33 days across NY, CA, NE, WA, ID, and OR. Most of the press coverage is breathless. The actual deployer obligations are narrower — and they rhyme.
Guard Proxy ships four built-in transformers — PII tokenization, context inject, field mask, schema validate — running on every MCP tool call in a fixed order. Here's what each one does, what triggers it, and how to configure your own.
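A stripped-down sketch of a fixed-order transformer pipeline, assuming toy implementations: the regex, the deny list, the required fields, and the `agent-123` stamp are all placeholders, not Guard Proxy's real configuration.

```python
import re

def tokenize_pii(payload: dict) -> dict:
    """Replace values matching a simple SSN pattern with opaque tokens."""
    ssn = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
    return {k: ssn.sub("[SSN_TOKEN]", v) if isinstance(v, str) else v
            for k, v in payload.items()}

def inject_context(payload: dict) -> dict:
    """Stamp the call with caller identity for downstream audit."""
    return {**payload, "_agent_id": "agent-123"}  # hypothetical stamp

def mask_fields(payload: dict, deny=("api_key",)) -> dict:
    """Drop fields the tool server should never see."""
    return {k: v for k, v in payload.items() if k not in deny}

def validate_schema(payload: dict, required=("query",)) -> dict:
    """Reject calls missing required fields before they reach the tool."""
    for field in required:
        if field not in payload:
            raise ValueError(f"missing required field: {field}")
    return payload

# Fixed order: tokenize first, so no later stage ever sees raw PII.
PIPELINE = [tokenize_pii, inject_context, mask_fields, validate_schema]

def guard(payload: dict) -> dict:
    for transform in PIPELINE:
        payload = transform(payload)
    return payload

call = {"query": "lookup 123-45-6789", "api_key": "sk-secret"}
print(guard(call))
# {'query': 'lookup [SSN_TOKEN]', '_agent_id': 'agent-123'}
```

The ordering is the design choice worth noticing: tokenization before injection and masking means nothing downstream, including the audit stamp, can capture the raw value.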
The Colorado AI Act becomes enforceable June 30, 2026 — 33 days before the EU AI Act. For US agent operators, it's the closer deadline, and the requirements cite NIST AI RMF and ISO 42001 by name.
The first agent to get kicked off your platform almost certainly has a track record somewhere else. You just can't see it — because every AI vendor silos its trust signals. Here's the mechanical walkthrough of how a privacy-preserving cross-tenant reputation layer actually works: the hashing, the endpoints, the score-blending math, and what a public lookup returns to an unauthenticated caller.
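Two of those pieces fit in a short sketch: deriving a shared, non-reversible fingerprint from an agent's public key, and blending a tenant's local score with the network's. Everything here is an assumption for illustration — the truncation length, the neutral prior of 500, and the 0.4 cap on network influence are invented parameters, not the product's math.

```python
import hashlib

def agent_fingerprint(public_key: str) -> str:
    """Deterministic, non-reversible identifier tenants can share
    without exposing the raw key or any tenant-local ID."""
    return hashlib.sha256(public_key.encode()).hexdigest()[:16]

def blend(local_score: float, local_n: int,
          network_score: float, network_n: int,
          network_cap: float = 0.4) -> float:
    """Evidence-weighted blend, capping the network's influence so a
    tenant's own observations dominate once they accumulate."""
    total = local_n + network_n
    if total == 0:
        return 500.0  # assumed neutral prior on a 0-1000 scale
    w_network = min(network_n / total, network_cap)
    return round((1 - w_network) * local_score + w_network * network_score, 1)

fp = agent_fingerprint("ed25519:AAAC3NzaC1lZDI1NTE5")
# Locally the agent looks fine (820 over 40 events); the network has
# seen 200 events averaging 310. The cap keeps the blend local-weighted.
print(blend(local_score=820, local_n=40, network_score=310, network_n=200))
# 616.0
```

The cap is the privacy-adjacent safety valve: a hostile or noisy network signal can drag a score down, but never override a tenant's own evidence outright.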
The EU AI Act doesn't explicitly mandate cryptographic audit logs. It mandates records 'over the lifetime of the system' — which is functionally the same thing once an auditor starts asking questions. Here's the difference between declaring your logs are trustworthy and proving it.
Your eval suite said the model scored 94% on BoolQ. Your agent still leaked a customer's SSN on Tuesday. Evaluations grade the model offline. Scoring grades the agent in production. They are not substitutes.
60% of organizations can't terminate a misbehaving AI agent. And the ones that can? Most can't prove it happened. Here's what EU AI Act Article 14 actually requires — and why your kill switch probably isn't compliant.
A poisoned security scanner led to compromised PyPI packages, 119K downloads in 40 minutes, and exfiltrated credentials across the AI stack. The LiteLLM incident is a wake-up call for every team routing LLM traffic through third-party libraries.
Microsoft's Azure MCP Server shipped with no authentication on critical functions. CVSS 9.1. No patch yet. If 'the reverse proxy with auth' is the official mitigation, that's the category we build.
Security researchers filed 30+ CVEs against MCP servers in early 2026, including a CVSS 9.6 RCE in a package downloaded half a million times. August 2 brings €35M fines. The math is not in your favor.
Microsoft just shipped an open-source runtime enforcement toolkit for all 10 OWASP Agentic AI risks. The framework is no longer aspirational — it's a production checklist. Here's how every risk maps to observable, controllable behavior.
Microsoft's new Agent Governance Toolkit tackles all 10 OWASP agentic risks. It's a massive validation of the agent trust category — and a clear signal that DIY governance won't scale.
Most agents pass raw PII directly to third-party MCP tool servers. Zero tokenization. Zero audit trail. Here's what the data shows — and how to stop it without rewriting your agent.
You have an AI agent inventory problem. You just don't know it yet. Shadow agents are the new shadow IT — faster, harder to detect, and exponentially more dangerous.
Agent identity tells you who. Agent trust scoring tells you what they'll do. Why verified AI agents still need continuous behavioral monitoring.
EU AI Act agent compliance: what Articles 12 and 14 require before the August 2, 2026 enforcement deadline, and how to generate audit-ready evidence.
MCP server security has three gaps: PII leakage through tool calls, prompt injection via tool responses, and uncontrolled tool access. Here's how a proxy closes all three.
Agent trust scoring replaces binary access control with behavior-based permissions. Here's how it works and why it matters for AI agent governance.