CertifiedData.io
EU AI Act · Article 12 · Record-Keeping

Designing AI logs that produce verifiable evidence

Article 12 is not just a logging requirement. For high-risk AI systems, record-keeping should produce traceable, tamper-evident evidence that compliance, legal, security, and engineering teams can inspect after the fact.

This guide explains how to structure Article 12 record-keeping around automatic event capture, signed Decision Ledger records, certified artifact references, and independent verification. It is an engineering reference for compliance readiness, not legal advice.

Forward to compliance, legal, or your regulator. No account required.

Executive summary

Article 12 readiness depends on traceability, not log volume.

Traditional application logs are useful for operations, but they are rarely sufficient for governance review. They may be mutable, distributed across systems, hard to correlate to a specific AI decision, or dependent on privileged access to the production environment. A compliance-grade record-keeping architecture should produce evidence that can be exported, inspected, and independently verified.

Article 12 in plain language

High-risk AI systems should automatically record events over their lifetime.

Article 12 focuses on record-keeping for high-risk AI systems. The core engineering obligation is that the system must technically allow automatic recording of events, often referred to as logs, throughout the system lifetime. The logging capability should support traceability appropriate to the intended purpose of the system.

High-risk AI systems shall technically allow for the automatic recording of events over the lifetime of the system. Logging capabilities should ensure a level of traceability of the system's functioning that is appropriate to the intended purpose. Consult the official Regulation text and counsel for authoritative interpretation.

Questions an Article 12-ready log should help answer

Which AI system, model, agent, or workflow produced the output?
When was the system operating, and which version or configuration was active?
Which input, reference database, dataset, prompt, tool call, or artifact was used?
What result was returned, and what reason codes or rationale were recorded?
Was the event routine, exceptional, overridden, escalated, or security-relevant?
Can an auditor detect if the record was altered after issuance?
Can the record be verified without trusting the application that created it?

Evidence model

Start by defining the unit of record.

An Article 12 logging architecture should not begin with a storage technology. It should begin with the smallest event that must be reviewable later: a high-risk decision, recommendation, action, override, escalation, or lifecycle event that can materially affect a person, organization, asset, or regulated process.

Runtime decision records

Capture the AI action or decision event itself: actor, timestamp, subject, output, reason codes, rationale summary, model or workflow version, policy context, and correlation IDs.

Artifact lifecycle records

Capture the provenance of datasets, model artifacts, prompts, reference databases, generated outputs, and supporting documentation used by the AI system.

Event class | Examples | Why it matters
Operating period | Service start, service stop, deployment window, batch job run. | Shows when the system was active and which operating context applied.
Decision or output | Approval, denial, ranking, recommendation, risk score, agent action. | Links system functioning to the user-visible or business-visible effect.
Reference data access | Reference database, ruleset, embedding index, dataset, prompt library. | Supports traceability between output and data context.
Artifact version | Model version, dataset fingerprint, configuration hash, prompt hash. | Prevents ambiguity about what actually produced the event.
Human intervention | Override, escalation, approval, rejection, review note. | Shows where human oversight affected the outcome.
Exception or risk signal | Out-of-distribution flag, confidence breach, policy failure, security event. | Helps identify situations that may present risk or require substantial modification.

Decision Ledger schema

A record should be structured before it is signed.

Machine-verifiable record-keeping depends on a stable payload shape. The exact schema varies by domain, but the record should separate identity, event context, decision content, references, integrity metadata, and verification metadata.

{
  "record_id": "dec_01hv...",
  "timestamp": "2026-05-04T12:00:00Z",
  "actor": {
    "type": "ai_system",
    "system_id": "credit-risk-workflow",
    "model_version": "risk-model-2026-04-18"
  },
  "entity": {
    "type": "application",
    "correlation_id": "case_7f32..."
  },
  "decision": {
    "outcome": "manual_review_required",
    "reason_codes": ["income_variance", "thin_file"],
    "rationale_summary": "Application requires human review because two risk signals exceeded policy thresholds."
  },
  "references": {
    "dataset_hash": "sha256:...",
    "model_artifact_hash": "sha256:...",
    "policy_version": "underwriting-policy-2026-03",
    "certificate_url": "https://certifieddata.io/verify/..."
  },
  "integrity": {
    "payload_hash": "sha256:...",
    "previous_record_hash": "sha256:...",
    "canonicalization": "RFC8785-JCS",
    "signature_algorithm": "Ed25519",
    "key_id": "cd-key-2026-01",
    "signature": "base64url..."
  }
}
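The integrity block in this schema can be sketched in code. The following is a minimal illustration, not the CertifiedData implementation: it uses `json.dumps` with sorted keys as a simplified stand-in for RFC 8785 JCS canonicalization, computes the payload hash and hash chain with SHA-256, and leaves the Ed25519 signature step (which would require a crypto library) as a comment. All function names are hypothetical.

```python
import hashlib
import json

def canonicalize(payload: dict) -> bytes:
    # Simplified stand-in for RFC 8785 JCS: sorted keys, no whitespace.
    # A production system would use a full JCS implementation.
    return json.dumps(payload, sort_keys=True, separators=(",", ":")).encode("utf-8")

def record_hash(record: dict) -> str:
    # Hash over the full canonicalized record, used to chain the next record.
    return "sha256:" + hashlib.sha256(canonicalize(record)).hexdigest()

def build_record(payload: dict, previous_record_hash: str) -> dict:
    # Payload hash covers the decision content before integrity metadata is attached.
    payload_hash = "sha256:" + hashlib.sha256(canonicalize(payload)).hexdigest()
    record = dict(payload)
    record["integrity"] = {
        "payload_hash": payload_hash,
        "previous_record_hash": previous_record_hash,
        "canonicalization": "sorted-keys JSON (stand-in for RFC8785-JCS)",
        # An Ed25519 signature over payload_hash, plus key_id, would be added here.
    }
    return record
```

Chaining each new record to `record_hash` of the previous one is what makes silent alteration detectable: changing any historical record changes its hash, which breaks the link stored in its successor.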

CertifiedData primitive mapping

CertifiedData contributes verifiable evidence, not a legal shortcut.

The CertifiedData stack separates three concerns that are often mixed together: the decision record, the artifact provenance record, and the independent verification surface.

Primitive | What it provides | Article 12 contribution
decision-record | A signed, machine-readable record of a decision or event, including actor, outcome, explanation, entity, timestamp, and integrity metadata. | Directly supports traceability of system functioning and review of specific decisions.
signed-certificate | Ed25519-signed certificate over an artifact's SHA-256 fingerprint, algorithm specification, and metadata. | Lets logs reference the exact dataset, model artifact, prompt, manifest, or AI output involved.
canonicalization | Deterministic JSON canonicalization, such as RFC 8785 JCS, before signing. | Lets third parties reproduce the exact signed bytes and avoid signature-equivalent ambiguity.
signing-key-registry | Public Ed25519 verification keys, including retired keys where needed for historical verification. | Supports independent verification even after key rotation.
public-registry | Public or controlled verification URLs for artifacts and selected evidence records. | Gives reviewers a stable path to verify provenance and integrity without production access.
manifest-certification | Batch certification for groups of related artifacts from a structured manifest. | Supports CI/CD and AI Bill of Materials workflows where multiple artifacts must be linked.
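Canonicalization matters because two JSON documents that differ only in key order or whitespace should hash, and therefore sign, to the same bytes. A small illustration of the property, again using sorted-key `json.dumps` as a simplified stand-in for a full RFC 8785 JCS implementation:

```python
import hashlib
import json

def canonical_hash(obj: dict) -> str:
    # Deterministic serialization: key order and formatting no longer matter.
    canonical = json.dumps(obj, sort_keys=True, separators=(",", ":")).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

a = {"outcome": "approved", "reason_codes": ["thin_file"]}
b = {"reason_codes": ["thin_file"], "outcome": "approved"}  # same content, different key order

# Naive serialization preserves insertion order, so the raw strings differ...
assert json.dumps(a) != json.dumps(b)
# ...but the canonical hashes agree, so both verify against one signature.
assert canonical_hash(a) == canonical_hash(b)
```

Without this step, a verifier that re-serializes the record with different formatting would compute a different digest and reject a perfectly valid signature.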

Practical result

An Article 12 record can say: this AI system produced this decision at this time, under this policy and model version, using these certified artifacts, and the record has not been silently altered since issuance.

Provider and deployer responsibilities

Logging architecture must be shared, but responsibilities are not identical.

Provider evidence package

  • Document the system's logging capability and event taxonomy.
  • Certify training datasets, synthetic datasets, model artifacts, manifests, or output sets where relevant.
  • Publish or share verification keys and certificate verification instructions.
  • Provide deployers with integration guidance for logging runtime events.

Deployer operating controls

  • Decide which high-risk decisions and lifecycle events must be logged.
  • Configure retention, access control, and export procedures.
  • Link records to user-visible actions through correlation IDs.
  • Maintain review procedures for overrides, incidents, appeals, and authority requests.

Implementation checklist

Move from requirement to operating evidence.

Workstream | Checklist | Evidence output
System classification | Confirm whether the workflow is a high-risk AI system, a component, or an adjacent governance process. Capture legal basis and owner. | Classification memo, risk owner, system inventory entry.
Event taxonomy | Define decision events, lifecycle events, exception events, and reference-data events. Decide what does not need a signed record. | Article 12 event taxonomy and logging policy.
Record schema | Define required fields, optional fields, redaction rules, pseudonymous identifiers, and correlation IDs. | Versioned schema and sample records.
Integrity model | Choose canonicalization, hashing, signature algorithm, key IDs, key rotation policy, and verification URL format. | Verification specification and key registry.
Artifact linkage | Link decisions to certified datasets, model artifacts, prompt fingerprints, policy versions, and generated outputs where relevant. | Artifact registry references and certificate IDs.
Retention and access | Set retention windows, storage controls, export procedures, and access logging for the logs themselves. | Retention schedule, access policy, export runbook.
Audit workflow | Define who can request records, how verification is performed, how exceptions are reviewed, and how counsel is involved. | Audit response playbook and review worksheet.
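The integrity-model and audit-workflow rows come together at export time: a reviewer walks the exported chain and flags any record whose stored link no longer matches the recomputed hash of its predecessor. A hypothetical verification loop (SHA-256 chain only; signature checks would sit alongside it, and all names are illustrative):

```python
import hashlib
import json

def canonical_hash(record: dict) -> str:
    # Hash over the whole record with deterministic serialization.
    data = json.dumps(record, sort_keys=True, separators=(",", ":")).encode("utf-8")
    return hashlib.sha256(data).hexdigest()

def append_record(chain: list, record_id: str, payload: dict) -> None:
    # Each record stores the hash of the previous full record ("genesis" for the first).
    prev = canonical_hash(chain[-1]) if chain else "genesis"
    chain.append({"record_id": record_id, "payload": payload, "previous_record_hash": prev})

def verify_export(records: list) -> list:
    """Return the IDs of records whose chain link fails to verify."""
    failures = []
    prev = "genesis"
    for rec in records:
        if rec.get("previous_record_hash") != prev:
            failures.append(rec["record_id"])
        prev = canonical_hash(rec)
    return failures
```

Note where the failure surfaces: altering a record changes its own hash, so the check fails on the record after it. That is enough for an auditor to localize the tampering without trusting the system that produced the export.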

Walk a real system through the full readiness picture with the EU AI Act audit-readiness checklist — 50 questions mapped to specific articles.

Common failure modes

What makes AI logs weak in review?

Mutable operational logs only

Risk: Privileged users can alter or delete records without a clear integrity signal.

Fix: Sign decision records and preserve hash-linked exports.

No artifact references

Risk: The log says a model acted, but not which model, dataset, policy, or reference source was used.

Fix: Link each decision to artifact fingerprints and certificate IDs.

Human-readable text only

Risk: Auditors cannot reliably query, compare, or automate verification.

Fix: Use machine-readable reason codes alongside rationale summaries.

No key rotation plan

Risk: Historical records become difficult to verify after signing keys change.

Fix: Maintain a public or controlled key registry with retired keys retained for verification.

Overcollection of personal data

Risk: Logs create privacy risk and make export harder.

Fix: Use pseudonymous entity IDs, references, and minimization rules.

Compliance claims without controls

Risk: A tool is presented as full compliance, but policy, retention, and governance remain undefined.

Fix: State what the evidence layer contributes and what remains the organization's responsibility.
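The key-rotation failure mode above has a simple structural remedy: a registry keyed by `key_id` that keeps retired keys available for historical verification while restricting new signatures to active keys. A hypothetical sketch (key material is elided placeholder text, and a real Ed25519 library would consume the returned public key):

```python
# Hypothetical registry: key_id -> public key material and lifecycle status.
KEY_REGISTRY = {
    "cd-key-2025-07": {"public_key": "base64url...", "status": "retired"},
    "cd-key-2026-01": {"public_key": "base64url...", "status": "active"},
}

def verification_key(key_id: str) -> str:
    """Return the public key for a record's key_id, including retired keys.

    An unknown key_id raises: a record signed by an unlisted key is a
    red flag for the reviewer, not a silent pass."""
    entry = KEY_REGISTRY.get(key_id)
    if entry is None:
        raise KeyError(f"unknown signing key: {key_id}")
    return entry["public_key"]

def active_signing_key_id() -> str:
    # New records must only ever be signed with an active key.
    return next(k for k, v in KEY_REGISTRY.items() if v["status"] == "active")
```

The design choice is that retirement removes a key from the signing path but never from the verification path, so records signed in 2025 remain checkable in 2026 and beyond.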

Internal review worksheet

Use the PDF as the forwardable review packet.

The downloadable PDF includes a signature-ready internal review worksheet for the AI system owner, engineering owner, and compliance or legal reviewer. It is the artifact compliance officers can forward internally.

Download Article 12 PDF

Talk to a human

Book a 15-minute AI Act evidence review.

Free consult for compliance officers, AI governance leads, or technical buyers evaluating Article 12 evidence approaches. No sales pitch.

Book office hours →

Build the audit trail you can prove

Move from internal logs to independently verifiable AI lifecycle records.

CertifiedData helps teams connect signed AI decision records, certified artifact references, public verification keys, and registry surfaces into an evidence layer that can be checked after the fact. It supports compliance readiness without overclaiming automatic legal compliance.

This guide is an engineering reference and does not provide legal advice. Read Regulation (EU) 2024/1689, consult counsel, and verify fit for your specific deployment.