Cryptographic vs. Declarative Evidence: What Article 12 Actually Asks You to Prove
There are two kinds of evidence in AI compliance. One is declarative: a policy document, a management attestation, an auditor's sign-off that your controls are in place. The other is cryptographic: a record whose integrity is mathematically verifiable without trusting anyone who wrote it.
EU AI Act Article 12 does not name the distinction. It also doesn't need to. Once you read the text carefully and think about what happens when a regulator actually tests your evidence, only one of the two kinds survives.
What the article actually says
The operative language in Article 12(1) is short:
"High-risk AI systems shall technically allow for the automatic recording of events (logs) over the lifetime of the system."
Three phrases are doing most of the work: automatic, over the lifetime, and shall technically allow.
"Automatic" rules out logs that require manual curation. The system itself must emit them. "Over the lifetime" rules out logs that start at audit prep time and stop at whatever moment is most convenient — the record must exist continuously from deployment to decommissioning. "Shall technically allow" means your system has to be architecturally capable, not just operationally willing.
Paragraph 2 then requires those logs be sufficient to identify risk-relevant events, support post-market monitoring, and enable the operation monitoring referenced in Article 26(5). In practice, an auditor will ask two questions of your logs: were all the events captured? and have any of them been edited since?
The first question is a completeness problem. The second is an integrity problem. Declarative evidence answers neither.
Why declarative evidence falls apart under scrutiny
Declarative evidence is what most AI governance programs currently produce. A SOC 2 report attests that the organization has controls in place. A policy document declares what the logging pipeline does. A ticket records that a human reviewed an alert.
None of these artifacts prove that the specific log entries in front of the auditor are the same ones that existed when the events actually occurred. They prove the organization has a process for producing trustworthy records. That's not a hostile framing — it's exactly what attestation frameworks are built to do. AICPA describes SOC 2 processing integrity as "complete, valid, accurate, timely, and authorized" processing, established through the auditor's examination of controls.
The problem: AI agents are non-deterministic and often highly autonomous. An auditor reviewing an agent that made an autonomous decision last Tuesday, possibly invoking a tool the operator didn't anticipate, has no transaction ledger to reference — only whatever log the system wrote. If that log lives in a mutable store, the "was anything edited" question is unanswerable. Not "difficult to answer." Unanswerable. A DBA with write access could have edited a row; an operator restarting the shipper could have dropped entries; the application itself could have been reconfigured to stop logging a particular event type. None of these are paranoid hypotheticals. They are ordinary operational failure modes.
Attestation says: "We had good controls, and the auditor verified that our controls work." Cryptographic evidence says: "This specific record has not been altered since the moment it was written, and here is the math that shows it."
What cryptographic evidence looks like
Cryptographic audit evidence has a specific shape. Each record contains a hash of its own content plus the hash of the previous record. The resulting chain has one mathematical property worth memorizing: flipping a single bit in any historical entry changes the hash of every entry that follows. An auditor who wants to verify integrity recomputes the chain from the beginning and confirms it matches the stored hashes. That's the whole protocol.
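That shape can be sketched in a few lines of Python. This is an illustrative toy, not any product's actual code; the field names and the all-zeros genesis sentinel are assumptions made for the example.

```python
import hashlib
import json

GENESIS = "0" * 64  # assumed sentinel previous-hash for the first entry

def entry_hash(content: dict, previous_hash: str) -> str:
    # SHA-256 over the canonical JSON of the content plus the previous hash.
    canonical = json.dumps(content, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((canonical + previous_hash).encode()).hexdigest()

def build_chain(events: list[dict]) -> list[dict]:
    chain, prev = [], GENESIS
    for content in events:
        h = entry_hash(content, prev)
        chain.append({"content": content, "previous_hash": prev, "hash": h})
        prev = h
    return chain

events = [{"event": "decision", "seq": i} for i in range(4)]
honest = build_chain(events)

# Flip a single bit in a historical event and rebuild: every hash from
# that entry onward changes, because each hash feeds into the next.
events[1]["seq"] = 99
tampered = build_chain(events)
changed = [i for i in range(4) if honest[i]["hash"] != tampered[i]["hash"]]
# changed == [1, 2, 3] — the edit cascades to every later entry
```

The cascade is why anchoring just the final hash somewhere external (a signed report, a third party) commits you to the entire history behind it.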
The chain doesn't prevent tampering. Anyone with write access to the database can still edit a row. What the chain does is make the tampering detectable — and, in a well-designed implementation, detectable by a third party who does not have to trust the organization that wrote the logs.
This is the distinction regulators care about when they audit high-risk systems. A log you can edit without detection is a log whose evidentiary value depends entirely on your word. A log where editing produces a mathematical inconsistency is a log whose evidentiary value is independent of your word.
How Vault implements this
VeriSwarm Vault is the hash-chained ledger inside the platform. Its implementation is intentionally unambitious:
- Each event written to Vault contains a content_hash and a previous_event_hash.
- The content_hash is a SHA-256 over the canonical JSON serialization of the event content plus the previous entry's hash.
- Every event is flagged is_immutable = True at write time. SQLAlchemy before_update and before_delete listeners on the SuiteEvent model reject any modification of an immutable row — the application itself cannot overwrite a Vault event after it is persisted.
- Chain verification is a single function (verify_vault_chain) that walks the chain from the first entry forward, recomputes each hash, and returns the first position where the stored value disagrees with the recomputed one — or a clean pass if the chain is intact.
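The verification walk in the last bullet fits in a few lines. The sketch below is a standard-library approximation of the idea, not Vault's actual verify_vault_chain; the entry layout and genesis value are assumptions for illustration.

```python
import hashlib
import json

def _h(content: dict, prev: str) -> str:
    # Hash of canonical JSON content concatenated with the previous hash.
    canonical = json.dumps(content, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((canonical + prev).encode()).hexdigest()

def sketch_verify_chain(entries: list[dict], genesis: str = "0" * 64):
    """Walk the chain forward, recomputing each hash from the entry
    content and the running previous-hash. Return the index of the
    first entry whose stored hash disagrees, or None for a clean pass."""
    prev = genesis
    for i, entry in enumerate(entries):
        recomputed = _h(entry["content"], prev)
        if entry["hash"] != recomputed:
            return i
        prev = recomputed
    return None

# Build a three-entry chain, then silently edit the middle row the way
# a DBA with write access could.
entries, prev = [], "0" * 64
for i in range(3):
    content = {"event": "policy_check", "seq": i}
    prev = _h(content, prev)
    entries.append({"content": content, "hash": prev})

assert sketch_verify_chain(entries) is None   # intact chain: clean pass
entries[1]["content"]["seq"] = 42             # the edited row
assert sketch_verify_chain(entries) == 1      # tamper surfaces at index 1
```

Note that the auditor running this function needs only the exported entries and the hash recipe — no trust in the operator.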
The code is visible in the repository (apps/api/app/services/vault.py), under 300 lines. There is nothing clever about it. That's the point: a cryptographic audit chain is not a sophisticated engineering project. It is a small, boring piece of infrastructure that you either have or don't.
What you get in exchange is the ability to answer the Article 12 question honestly. When an auditor asks for evidence of automatic recording over the lifetime of the system, you export a Vault chain, hand them a verification command, and let them confirm the chain's integrity themselves. You are not asking them to trust you.
What to do before August 2
The August 2, 2026 enforcement date for Annex III high-risk systems is fifteen weeks away. Three practical checks:
Ask where your agent logs live. If the answer is "a Postgres table" or "an Elasticsearch index" with no chain verification layer on top, you have declarative evidence. That may still satisfy some auditors — but it won't satisfy a determined one.
Ask what happens if someone deletes a row. If the answer is "we'd notice because of our monitoring," you are still relying on attestation. The question isn't whether you would notice — it's whether a regulator reviewing the evidence six months later could detect it.
Decide which question you want to be answering. Declarative: "we have controls, and auditors have reviewed them." Cryptographic: "here is the chain, here is the verification function, check it yourself."
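The second check above is worth seeing concretely: a chain surfaces a deleted row, not just an edited one, because the entry after the gap no longer links to the entry before it. A minimal sketch (illustrative names and layout, not production code):

```python
import hashlib
import json

def _h(content: dict, prev: str) -> str:
    canonical = json.dumps(content, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((canonical + prev).encode()).hexdigest()

def first_break(entries: list[dict], genesis: str = "0" * 64):
    # Index of the first entry that fails verification, or None.
    prev = genesis
    for i, entry in enumerate(entries):
        if _h(entry["content"], prev) != entry["hash"]:
            return i
        prev = entry["hash"]
    return None

entries, prev = [], "0" * 64
for i in range(4):
    content = {"seq": i}
    prev = _h(content, prev)
    entries.append({"content": content, "hash": prev})

del entries[2]  # a row quietly dropped months before the review
gap = first_break(entries)
# gap == 2: the old entry 3 no longer links to entry 1, so the
# deletion is detectable long after the fact, by anyone
```

No monitoring alert is involved; the evidence itself reports the gap.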
Both are legitimate evidentiary strategies. The second is the one that survives an adversarial audit without requiring you to be trusted.
VeriSwarm Vault is available on the Max plan. The free Gate tier records policy decisions and moderation flags to the same hash-chained structure, so you can start building a verifiable decision log today and upgrade when Vault's full event coverage becomes relevant. The signup flow takes about a minute.
Your audit log is a claim. Decide whether it's the kind of claim that holds up when someone tries to break it.