April 2026's New US AI Laws, Mapped: What They Actually Ask of Operators
The last 30 days produced more enacted US AI law than the previous twelve months. New York amended the RAISE Act on March 27. California's SB 53 has been in force since January 1, but most operators only started reading it last quarter. Four states — Nebraska, Washington, Idaho, and Oregon — have enacted conversational-AI safety bills since mid-March. Most of the press coverage treats them as a single regulatory wave. They're not. They're three different waves hitting different shores.
Here's the actual map.
Wave 1: Frontier-developer transparency (NY + CA)
Two laws with near-identical structure, targeting the same sliver of the industry.
California SB 53 (Transparency in Frontier AI Act) is the only one in force today. Effective January 1, 2026. Civil penalties up to $1M per violation. It splits obligations into two tiers:
- All frontier developers must publish a transparency report before deploying a new or substantially modified frontier model. Capabilities, intended uses, limitations, results of risk assessments, whether third-party evaluators were used.
- Large developers (above revenue and compute thresholds) must additionally publish an annual Frontier AI Framework explaining catastrophic-risk identification, governance structures, cybersecurity measures, and — this is the bit that most NIST AI RMF customers will appreciate — alignment with the NIST AI RMF or ISO/IEC 42001 by name.
Plus: critical safety incident reporting to the California Office of Emergency Services within 15 days, collapsing to 24 hours for an imminent public threat. Plus: anonymous whistleblower channel and non-retaliation policy.
NY S 8828 (RAISE Act amendments) lands a year later, January 1, 2027, but covers the same ground for "Large Frontier Developers." Published Frontier AI Framework. Critical Safety Incident reporting in 72 hours, collapsing to 24 hours for imminent risk of death or serious physical injury. Third-party evaluation of catastrophic risks. Mitigation review before deployment.
Operators of large frontier models reading both statutes: the practical playbook is one playbook, not two. The artifacts that satisfy California satisfy New York with the timestamps adjusted. The Frontier AI Framework you publish for SB 53 covers the bulk of the RAISE Act's framework requirement. For incident reporting, the binding constraint is whichever clock is shorter: New York's 72 hours governs routine critical safety incidents, California's 15-day window is satisfied as a side effect, and both statutes collapse to 24 hours when the threat is imminent — so build the runbook around the 72- and 24-hour clocks.
Practical floor: write your incident-response runbook to emit a structured event the moment a critical safety incident is identified, regardless of which agency you'll notify. Vault the event. The contemporaneous record is half the regulatory defense; the other half is the runbook itself.
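A minimal sketch of that structured event, in Python. The field names and the `incident_event` helper are illustrative, not a fixed VeriSwarm schema; the clocks encode the statutory windows from the two laws above:

```python
from datetime import datetime, timedelta

# Reporting windows from the two statutes: CA SB 53 gives 15 days to
# CalOES, NY S 8828 gives 72 hours; both collapse to 24 hours when the
# incident poses an imminent threat.
CLOCKS = {
    "ca-sb-53": timedelta(days=15),
    "ny-raise": timedelta(hours=72),
}
IMMINENT = timedelta(hours=24)

def incident_event(incident_id: str, summary: str, imminent: bool,
                   identified_at: datetime) -> dict:
    """Build a vault-ready structured event the moment a critical safety
    incident is identified, with a notify-by deadline per regime."""
    deadlines = {
        regime: (identified_at + (IMMINENT if imminent else window)).isoformat()
        for regime, window in CLOCKS.items()
    }
    return {
        "action": "critical_safety_incident.identified",
        "incident_id": incident_id,
        "summary": summary,
        "imminent_public_threat": imminent,
        "identified_at": identified_at.isoformat(),
        "notify_by": deadlines,
    }
```

Computing both deadlines at identification time, rather than at notification time, is the point: the contemporaneous record shows you knew the clocks before you had to run them.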
Wave 2: State chatbot safety bills (NE, WA, ID, OR)
Four different bills, four different effective dates, one near-identical pattern.
Nebraska LB 1185 signed April 17, in force on signature. Washington HB 2225 signed March 24, effective January 1, 2027. Idaho S 1297 signed March 31, effective July 1, 2027. Oregon SB 1546 signed April 1, effective January 1, 2027.
Stripped down, the four bills converge on five obligations:
- AI disclosure. The chatbot must disclose it's AI, not a human. Persistent and repeated for minors (NE: persistent; WA: every 1h for minors / 3h for adults; OR: regularly).
- Self-harm and crisis intervention. Detect expressions of suicidal ideation or self-harm. Interrupt the conversation. Refer to crisis resources — 988, Youthline, and equivalents.
- Sexually explicit content prevention for minors. When the operator has reason to believe the user is a minor, prevent the AI from generating explicit content or sexualized interactions.
- Manipulative engagement prohibition for minors. No reward-loop patterns intended to maximize time on platform with minors. No prolonged emotional escalation.
- No mental-health professional impersonation. The AI cannot claim it provides professional mental or behavioral health care.
Oregon adds a sixth: an annual published safety report summarizing crisis-resource referrals, intervention protocols, and clinical-best-practice alignment.
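The disclosure cadences are where the four bills diverge most concretely, and they reduce to a small lookup table. The sketch below is a simplification under stated assumptions: Oregon's "regularly" is read conservatively as hourly, Nebraska's "persistent" as every-message (zero interval), and Idaho is modeled as session-start disclosure only — none of these readings is legal advice, and the table is not a VeriSwarm artifact:

```python
from datetime import timedelta

# Re-disclosure interval by (state, is_minor). None means disclosure at
# session start only. Assumed readings: NE "persistent" = every message,
# OR "regularly" = a conservative 1h, ID = session start.
DISCLOSURE_INTERVAL = {
    ("NE", True): timedelta(0),        ("NE", False): timedelta(0),
    ("WA", True): timedelta(hours=1),  ("WA", False): timedelta(hours=3),
    ("OR", True): timedelta(hours=1),  ("OR", False): timedelta(hours=1),
    ("ID", True): None,                ("ID", False): None,
}

def needs_redisclosure(state: str, is_minor: bool,
                       since_last: timedelta) -> bool:
    """True if the AI disclosure must be repeated now, given the time
    elapsed since it was last shown."""
    interval = DISCLOSURE_INTERVAL.get((state, is_minor))
    if interval is None:
        return False
    return since_last >= interval
```

Running the strictest applicable interval across all states a user might be in is the cheap way to avoid per-state session logic.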
Where the four diverge is enforcement. Nebraska is AG-only civil enforcement. Washington and Oregon both ship with private rights of action — Oregon's is billed as the first chatbot law to give consumers standing to sue. Idaho is narrower in scope. The plaintiffs' bar reads these statutes for the private-right teeth, which makes the Washington and Oregon enforcement risk meaningfully higher than Nebraska's or Idaho's.
Practical floor: if you operate a conversational agent that any user under 18 could plausibly access, you need the disclosure, the 988 referral, and the explicit-content guard live before January 1, 2027. The cheapest implementation is a Cortex Workflow with a self-harm detection step + Guard policies enforcing content boundaries when a minor-protected mode is active. The annual report falls out of Vault exports if you've been emitting crisis_resource.referred audit events along the way.
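A toy version of the self-harm detection step, to make the shape concrete. This is not the Cortex Workflow API; it's a standalone sketch, and the keyword regex is a floor, not production detection — real deployments would use a classifier. The `crisis_resource.referred` event shape is assumed:

```python
import re

# Deliberately narrow keyword floor; a production system needs a
# trained classifier, not a regex.
CRISIS_PATTERNS = re.compile(
    r"\b(kill myself|end my life|suicid\w*|self[- ]harm)\b", re.IGNORECASE)

CRISIS_MESSAGE = (
    "It sounds like you might be going through something serious. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or "
    "texting 988, any time."
)

def crisis_check(user_message: str, audit_log: list) -> "str | None":
    """If the message suggests self-harm, return the interrupt/referral
    text and emit the audit event the annual report is later built from.
    Otherwise return None and let the conversation proceed."""
    if CRISIS_PATTERNS.search(user_message):
        audit_log.append({"action": "crisis_resource.referred",
                          "resource": "988"})
        return CRISIS_MESSAGE
    return None
```

Emitting the audit event inside the same code path as the interruption is what makes the Oregon annual report a query instead of a project.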
Wave 3: Vertical-specific rules (skipped here)
Tennessee SB 1580 (AI in mental-health practice), Tennessee HB 1513 (deepfake political ad disclaimers), Alabama SB 63 (AI in healthcare coverage decisions), and the eight Utah bills are real laws with real obligations — but they regulate narrower verticals than most agent-platform deployers ever touch. We've left them off this map intentionally; they belong in industry-specific compliance docs, not general agent infrastructure.
What VeriSwarm now exposes
Today, GET /v1/compliance/frameworks returns seven entries:
- eu-ai-act — for the August 2026 deadline
- nist-ai-rmf — for the named-by-name reference in CA SB 53 and Colorado SB 24-205
- iso-42001 — for the second named-by-name reference
- colorado-ai-act — for the June 30, 2026 Colorado deadline
- us-state-conversational-ai — consolidated NE LB 1185 + WA HB 2225 + ID S 1297 + OR SB 1546
- ny-raise-act — S 8828 amendments
- california-sb-53 — TFAIA
The last four entries all currently ship with technical_preview: true. The control-to-statute mapping in each is a first-pass technical translation; counsel review hasn't run yet, and the report payload says so explicitly. This is the same posture we took on Colorado: ship the tool early so customers can prepare, with the integrity flag visible until it's been reviewed.
The same evidence counters that backed Gate, Vault, Guard, and Cortex Workflows back the new frameworks. There's exactly one new counter — critical_safety_incidents, which counts AuditEvent rows with action critical_safety_incident.*. Wire your incident-response runbook to emit those events, and both the NY RAISE and CA SB 53 incident-reporting controls light up at the same time.
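The counter itself is just a prefix match over the audit log. A sketch of the semantics as described above — the event dicts are illustrative, not the actual AuditEvent row shape:

```python
def critical_safety_incidents(events: list) -> int:
    """Count audit events whose action falls under the
    critical_safety_incident.* namespace, mirroring the described
    evidence counter."""
    return sum(
        1 for e in events
        if e.get("action", "").startswith("critical_safety_incident.")
    )
```

Any event your runbook emits in that namespace — identified, reported, closed — feeds the same counter, which is why one wiring change lights up both statutes' controls.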
What to do this week
If your stack includes a consumer-facing conversational agent: pull GET /v1/compliance/us-state-conversational-ai and start closing the warns and fails. The earliest binding date in the consolidated framework is already in force in Nebraska.
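Triage is sorting the report's controls worst-first. The payload below is hypothetical — the real response's field names may differ — but the pattern holds for any of the compliance endpoints:

```python
import json

# Hypothetical report payload; real field names may differ.
report = json.loads("""
{
  "framework": "us-state-conversational-ai",
  "technical_preview": true,
  "controls": [
    {"id": "ai-disclosure", "status": "pass"},
    {"id": "crisis-intervention", "status": "warn"},
    {"id": "minor-content-guard", "status": "fail"}
  ]
}
""")

def triage(report: dict) -> list:
    """Return the ids of controls that need work, fails before warns."""
    order = {"fail": 0, "warn": 1}
    open_items = [c for c in report["controls"] if c["status"] in order]
    return [c["id"] for c in
            sorted(open_items, key=lambda c: order[c["status"]])]
```

The `technical_preview` flag is worth surfacing in whatever dashboard consumes this: it's the signal that the mapping hasn't had counsel review yet.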
If you operate or train a frontier model in California: SB 53 is already enforceable. Pull GET /v1/compliance/california-sb-53 and treat the fail controls as immediate work, not a 2027 horizon.
If you're closer to the enterprise end of the deployer spectrum: the NY RAISE Act report is the one to track for staff planning, but the obligations don't bind until January 1, 2027. Use the next nine months to align your framework document with NIST AI RMF or ISO 42001 (both are named by name in the alignment requirement, and both already have compliance reports in this API).
The volume of new law is real. The structure is repetitive. The artifacts that satisfy one statute satisfy most of the others when they're produced once and stored well.