EU AI Act in August? Colorado AI Act in June. US Agent Operators Have a Closer Deadline.
Most compliance decks floating around AI teams right now end in "…by August 2, 2026." That's the EU AI Act. It's the loudest date, it carries the scariest fines, and for a lot of US teams it's also the wrong date to optimize around.
The closer deadline — for anyone making consequential decisions about Americans — is June 30, 2026. That's the Colorado AI Act (SB 24-205, as amended by SB25B-004), now 68 days away. It lands 33 days before the EU enforcement date, and the requirements aren't a lighter-weight echo of Brussels. They're a distinct standard, with named frameworks baked into the text.
What Colorado actually moved
Governor Polis called a special session in August 2025 specifically because the legislature couldn't reconcile competing amendments to the original AI Act. The compromise was to delay — not to weaken. On August 28, 2025, Polis signed SB25B-004, pushing the operative date from February 1, 2026 to June 30, 2026.
Everything else stayed the same: every substantive obligation, rebuttable presumption, and exemption carried over unchanged. The delay bought time, not leniency. The Colorado Attorney General has exclusive enforcement authority. Starting June 30, the clock runs.
What counts as a "consequential decision"
The Act targets "high-risk" AI systems — defined as any system that makes, or is a substantial factor in making, a consequential decision. The statute enumerates the categories:
- Employment and employment opportunities
- Education enrollment or opportunity
- Financial or lending services
- Essential government services
- Healthcare services
- Housing
- Insurance
- Legal services
If an agent in your stack is anywhere in the loop on those decisions — not just making them, but materially influencing them — the Act applies. The statute's reach is broader than many teams initially assume: "substantial factor" captures systems that rank, filter, or triage, not just the ones that render a final answer.
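The inventory step can be as simple as tagging each agent with the category it touches. A minimal sketch in Python (the class names and the `role` field are an illustrative structure, not statutory text or a VeriSwarm API):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ConsequentialCategory(Enum):
    """The decision categories enumerated by SB 24-205."""
    EMPLOYMENT = "employment"
    EDUCATION = "education"
    FINANCIAL = "financial_or_lending"
    GOVERNMENT = "essential_government_services"
    HEALTHCARE = "healthcare"
    HOUSING = "housing"
    INSURANCE = "insurance"
    LEGAL = "legal_services"

@dataclass
class AgentAction:
    agent_id: str
    role: str  # e.g. "ranks", "filters", "decides" -- influence counts, not just the final call
    category: Optional[ConsequentialCategory]  # None = not tied to a consequential decision

def is_high_risk(action: AgentAction) -> bool:
    """In scope if the agent touches any enumerated category at all:
    'substantial factor' reaches ranking, filtering, and triage."""
    return action.category is not None
```

A résumé-ranking agent, for instance, comes back in scope even though it never renders the final hiring decision.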
There's a small-business carveout: deployers with fewer than 50 employees throughout the deployment period are exempt from the public-statement, impact-assessment, and risk-management-policy requirements, provided continuous learning isn't based on the deployer's own data. Most agent-operating teams past early traction will not qualify.
The twist: Colorado names the frameworks
Here's where Colorado diverges from the EU AI Act in a way that matters operationally.
The EU AI Act is framework-agnostic. Article 12 requires automatic logging; Article 14 requires human oversight; Article 9 requires a risk management system. How you build it is your problem.
Colorado is more opinionated. Deployers must implement a risk management policy aligned with recognized standards, and the statute names two: NIST's AI Risk Management Framework and ISO/IEC 42001. The practical effect: if you can produce a mapping showing your controls align to either NIST AI RMF (Govern / Map / Measure / Manage) or ISO 42001 (clauses 4–10), you have a rebuttable presumption of reasonable care.
That is a very specific ask. And it's the kind of ask that makes or breaks an audit week.
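Keeping that mapping as data next to the controls makes it cheap to regenerate for an auditor. A minimal sketch: the four function names come from NIST AI RMF, while the control IDs are invented for illustration:

```python
# Internal controls keyed by NIST AI RMF function. The function names are
# from the framework; every control ID below is a hypothetical example.
CONTROL_MAP = {
    "GOVERN":  ["risk-policy-signed-off", "roles-and-escalation-defined"],
    "MAP":     ["agent-inventory", "consequential-decision-tagging"],
    "MEASURE": ["bias-eval-quarterly", "drift-monitoring"],
    "MANAGE":  ["kill-switch", "incident-runbook", "annual-review"],
}

def unmapped_functions(control_map: dict) -> list:
    """Functions with no implemented control -- the gaps a reviewer would flag."""
    return [fn for fn, controls in control_map.items() if not controls]
```

Because the mapping lives in code, a framework revision becomes a diff in review rather than an untracked spreadsheet change.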
What Colorado requires, concretely
Stripped down, the deployer obligations starting June 30:
- Risk Management Policy and Program — aligned to NIST AI RMF or ISO 42001, covering foreseeable risks of algorithmic discrimination across the system lifecycle.
- Impact Assessment — completed before deployment of a high-risk system, reviewed at least annually, refreshed within 90 days after any intentional and substantial modification, and retained for three years.
- Consumer Notification — when a high-risk system makes or substantially contributes to a consequential decision about a specific person, that person must be notified.
- Annual Review — of each deployed high-risk system to confirm it is not producing algorithmic discrimination.
- Public Statement — summarizing the kinds of high-risk systems the deployer deploys and how known and foreseeable risks are managed.
Three years of impact-assessment retention is the operational footnote that tends to get missed. It's not "keep the artifact until the next review." It's "keep every version of the artifact for three years." For an agent that gets meaningfully modified quarterly, that's a retained history, not a single document.
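The cadence is mechanical enough to compute. A sketch of the deadline math, using the statutory periods quoted above (the helper and its field names are illustrative; "three years" is approximated as 1,095 days here, and counsel should confirm how the retention period is actually measured):

```python
from datetime import date, timedelta
from typing import Optional, Tuple

REFRESH_WINDOW = timedelta(days=90)       # after an intentional and substantial modification
ANNUAL_REVIEW = timedelta(days=365)       # review at least annually
RETENTION = timedelta(days=3 * 365)       # three-year retention, approximated in days

def assessment_deadlines(
    deployed: date, modified: Optional[date] = None
) -> Tuple[Optional[date], date, date]:
    """Deadlines for one impact-assessment version: (refresh due after the
    last substantial modification, next annual review, retention expiry)."""
    refresh_due = (modified + REFRESH_WINDOW) if modified else None
    return refresh_due, deployed + ANNUAL_REVIEW, deployed + RETENTION
```

Run against an agent deployed July 15, 2026 and substantially modified October 1, the refresh falls due December 30, 2026, months before the first annual review.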
Where VeriSwarm fits
VeriSwarm's compliance module exposes per-tenant reports against all three of the frameworks that US agent operators are now triangulating:
- `GET /v1/compliance/eu-ai-act` — for the August 2 deadline
- `GET /v1/compliance/nist-ai-rmf` — for Colorado's first named framework
- `GET /v1/compliance/iso-42001` — for Colorado's second named framework
Each report walks the relevant controls, shows which VeriSwarm features provide coverage, counts the evidence artifacts Gate and Vault have collected under your tenant, and returns pass/warn/fail per control with specific remediation recommendations. The endpoints live in the `compliance_assessor` service; the framework definitions (`NIST_AI_RMF_CONTROLS`, `ISO_42001_CONTROLS`) are in the same file, so when a control gets tightened by a framework revision, the mapping updates with the code, not in a spreadsheet attached to an email.
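Hitting one of those endpoints and triaging the result might look like this. The path comes from the list above; the bearer-token auth and the response shape (a `controls` array with `status` and `remediation` fields) are assumptions about the API, not documented behavior:

```python
import json
import urllib.request

def fetch_report(base_url: str, token: str, framework: str = "nist-ai-rmf") -> dict:
    """GET /v1/compliance/{framework} for the authenticated tenant.
    The Authorization scheme is an assumption for this sketch."""
    req = urllib.request.Request(
        f"{base_url}/v1/compliance/{framework}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def needs_work(report: dict) -> list:
    """Controls that came back warn or fail -- the remediation queue."""
    return [c for c in report.get("controls", []) if c.get("status") in ("warn", "fail")]
```

The `needs_work` list, sorted by severity, is a reasonable first draft of the gap-closing plan for the weeks before June 30.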
For the mechanical requirements — not the legal attestation, but the evidence that makes the attestation defensible — three VeriSwarm pillars do most of the work:
- Gate event ingestion captures the consequential-decision moments. Every agent action that feeds into a hiring, housing, lending, or healthcare decision flows through Gate's 22-event taxonomy, which gives you the audit trail Colorado's impact assessment needs.
- Vault holds that trail as a hash-chained, immutable ledger with signed exports. Three-year retention isn't a storage policy to remember; it's the default.
- Guard runs the runtime controls — PII tokenization, kill switch, policy scanning — that the risk-management-policy column of your impact assessment has to describe.
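Hash-chaining is what makes the "immutable" claim checkable rather than aspirational: each entry's hash covers the previous entry's hash, so editing any record invalidates every hash after it. A generic sketch of the verification side (Vault's actual record format isn't specified here, so the field names are assumptions):

```python
import hashlib
import json

GENESIS = "0" * 64  # conventional previous-hash for the first entry

def entry_hash(prev_hash: str, payload: dict) -> str:
    """Hash an entry together with its predecessor's hash. sort_keys makes
    the serialization canonical so verification is deterministic."""
    body = json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(prev_hash.encode() + body).hexdigest()

def verify_chain(entries: list) -> bool:
    """entries: [{'payload': ..., 'hash': ...}, ...] in ledger order.
    Returns False if any payload or hash has been altered."""
    prev = GENESIS
    for e in entries:
        if e["hash"] != entry_hash(prev, e["payload"]):
            return False
        prev = e["hash"]
    return True
```

A signed export of such a chain is the kind of artifact an impact assessment can point to when it claims the decision trail hasn't been edited after the fact.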
None of this is a substitute for counsel. Every organization still needs a lawyer to read SB 24-205 against their specific agent fleet. But the parts of the work that are "prove what your agents did and map it to a named framework" are not the legal parts. They're the infrastructure parts. That's what VeriSwarm is for.
What to do this week
If you haven't inventoried which of your agents touch consequential decisions under Colorado's definition, that's the job for this week. If you have, pull the compliance report against `nist-ai-rmf` (if that's your chosen framework) and start closing the control gaps that come back as warn or fail. Sixty-eight days is shorter than it sounds when the impact assessment needs signoff, the public statement needs legal review, and the Attorney General's office can enforce from June 30 onward.
You can sign up for a free Gate workspace and hit the compliance endpoints on day one. The scorecard at veriswarm.ai/eu-ai-act-check already walks the EU Annex III criteria; a Colorado variant is next. Until it ships, the NIST AI RMF report is the one that maps most cleanly to what the Attorney General's office will ask about.
August is important. June is closer.