CertifiedData.io
EU AI Act · Article 86

Article 86 explanation-request evidence for AI decisions

Article 86 creates pressure to explain AI-assisted decisions in a way that is specific, traceable, and reviewable. The hardest part is not writing an explanation after the fact; it is preserving the evidence needed to support the explanation before a request arrives.

This page explains how Decision Ledger records, artifact references, reason codes, oversight markers, and verification metadata can support explanation-request workflows. It is an engineering reference for compliance-readiness, not legal advice.

Executive summary

The evidence question is what you can prove later.

Documentation explains expected behavior, but high-risk AI review asks for proof of actual behavior. CertifiedData's evidence model records AI decisions as canonical JSON payloads, hashes them, signs them with Ed25519, links them to artifacts where relevant, and exposes verification paths that do not require privileged dashboard access.
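The canonicalize-hash-sign pipeline described above can be sketched in a few lines. This is an illustrative sketch, not CertifiedData's actual schema or code: the field names and `canonical_payload` helper are assumptions, and the Ed25519 signing step is noted only as a comment because it requires a key-management library beyond the standard library.

```python
import hashlib
import json

def canonical_payload(record: dict) -> bytes:
    # Canonical JSON: sorted keys, no insignificant whitespace,
    # so the same decision always serializes to the same bytes.
    return json.dumps(record, sort_keys=True, separators=(",", ":")).encode("utf-8")

# Illustrative decision record -- field names are assumptions.
record = {
    "decision_id": "dec_001",
    "actor": "loan-scoring-service",
    "selected_option": "decline",
    "reason_codes": ["DTI_TOO_HIGH"],
    "model_version": "2024-06-01",
}

payload = canonical_payload(record)
digest = hashlib.sha256(payload).hexdigest()
# An Ed25519 signature over `payload` (e.g. via a cryptography
# library) would then make the record tamper-evident.

# Key order must not change the hash:
reordered = dict(reversed(list(record.items())))
assert hashlib.sha256(canonical_payload(reordered)).hexdigest() == digest
```

Canonical serialization is the detail that matters here: without a deterministic byte representation, the same logical decision could produce different hashes, and verification would fail for records that were never tampered with.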

Obligation themes

What teams need to operationalize

Decision-specific context

Preserve the actor, entity, output, selected option, timestamp, model or system version, and policy context for each explainable decision.

Reason codes and rationale

Capture machine-readable reason codes plus a human-readable rationale summary that can be reviewed and redacted if needed.
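A reason-code taxonomy can be as simple as a shared enumeration that every decision record draws from. The codes below are invented examples, not a recommended set; real codes should be defined jointly with compliance, product, and support teams.

```python
from enum import Enum

class ReasonCode(str, Enum):
    # Illustrative codes only -- a real taxonomy should be owned by
    # compliance, product, and support teams, not only engineering.
    DTI_TOO_HIGH = "DTI_TOO_HIGH"
    INSUFFICIENT_HISTORY = "INSUFFICIENT_HISTORY"
    POLICY_EXCLUSION = "POLICY_EXCLUSION"

# Machine-readable codes plus a human-readable summary travel together:
decision = {
    "reason_codes": [ReasonCode.DTI_TOO_HIGH.value],
    "rationale_summary": "Debt-to-income ratio above policy threshold.",
}
```

Keeping the codes in one enumerated type makes them analyzable and comparable across requests, while the free-text summary remains available for review and redaction.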

Artifact lineage

Connect the decision to datasets, model artifacts, prompt packages, policy documents, or reference data that materially influenced the output.

Human oversight state

Record whether review was required, whether a person intervened, and whether the final outcome differed from the AI-supported output.

Integrity proof

Make the evidence tamper-evident so a requester, auditor, or reviewer can trust that the record was not modified after the fact.

Controlled disclosure

Design exports so they can support explanation without exposing protected IP, security-sensitive logic, or unnecessary personal data.

Evidence model

Records that make review possible

Signed decision record

The core evidence unit shows what happened and provides the canonical payload used for verification.

Explanation fields

Reason codes, rationale summary, confidence where appropriate, and selected option provide the starting point for a meaningful explanation.

Certified artifact references

Artifact IDs or hashes identify the model, prompt, policy, dataset, or generated output involved in the decision.

Oversight events

Linked records show whether a human review step confirmed, changed, escalated, or rejected the AI-supported output.

Verification result

A hash and Ed25519 signature check shows that the explanation evidence matches the signed record.

Exportable evidence bundle

The bundle provides legal, compliance, and support teams with structured data they can review before responding.
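The record types above could be combined into a bundle roughly like the following. The structure and field names are a sketch under assumed naming, not the actual export format; the signature and key ID are left as placeholders because producing them requires a signing key.

```python
import hashlib
import json

# Hypothetical evidence bundle combining the record types above.
decision = {
    "decision_id": "dec_001",
    "timestamp": "2025-01-15T10:32:00Z",
    "selected_option": "decline",
    "reason_codes": ["DTI_TOO_HIGH", "INSUFFICIENT_HISTORY"],
    "rationale_summary": "Debt-to-income ratio above policy threshold.",
}
payload = json.dumps(decision, sort_keys=True, separators=(",", ":")).encode("utf-8")

bundle = {
    "signed_decision_record": decision,
    "artifact_refs": [
        {"kind": "model", "id": "model_v7"},
        {"kind": "policy", "id": "lending_policy_2024_q4"},
    ],
    "oversight_events": [
        {"reviewer": "analyst_42", "action": "confirmed"},
    ],
    "verification": {
        "sha256": hashlib.sha256(payload).hexdigest(),
        "signature": None,  # Ed25519 signature would be attached here
        "key_id": None,     # identifies which public key verifies it
    },
}
```

Packaging the decision, its lineage, its oversight trail, and its integrity proof in one structure is what lets legal and compliance teams review a request without privileged dashboard access.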

Implementation workflow

A practical rollout pattern

  1. Define explanation triggers

    Identify which decisions may generate explanation requests and what evidence those requests require.

  2. Standardize reason codes

    Create a reason-code taxonomy that is meaningful to compliance, product, and support teams rather than only to developers.

  3. Link context at decision time

    Record model version, policy version, artifact references, and oversight status when the decision occurs, not when the request arrives.

  4. Create a response workflow

    Route request intake, record retrieval, legal review, redaction, and final response through defined owners.

  5. Verify before disclosure

    Before sending an explanation, confirm the signed record, hash, and referenced artifacts still verify.
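The final step, verifying before disclosure, amounts to recomputing the hash over the canonical payload and comparing it with the value captured at decision time; an Ed25519 signature check against the recorded key ID would follow the same pattern. A minimal sketch with illustrative field names:

```python
import hashlib
import json

def verify_record(stored: dict) -> bool:
    """Recompute the SHA-256 of the canonical payload and compare it
    with the hash captured at decision time. An Ed25519 signature
    check over the same bytes would follow the same pattern."""
    payload = json.dumps(stored["payload"], sort_keys=True,
                         separators=(",", ":")).encode("utf-8")
    return hashlib.sha256(payload).hexdigest() == stored["sha256"]

payload = {"decision_id": "dec_001", "selected_option": "decline"}
stored = {
    "payload": payload,
    "sha256": hashlib.sha256(
        json.dumps(payload, sort_keys=True, separators=(",", ":")).encode("utf-8")
    ).hexdigest(),
}

assert verify_record(stored)                      # untouched record verifies
stored["payload"]["selected_option"] = "approve"  # tampering...
assert not verify_record(stored)                  # ...is detected
```

Running this check before every disclosure catches both accidental corruption and after-the-fact modification, which is exactly the dispute a signed record is meant to foreclose.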

Review questions

Questions an evidence trail should answer

Can the organization find the exact decision record that triggered the explanation request?
Does the record include reason codes or rationale that are meaningful outside engineering?
Can the decision be linked to the model, prompt, policy, or artifact version used at the time?
Can legal or compliance verify the record before preparing a response?
Can sensitive information be redacted while preserving enough evidence for review?
Can the organization show whether a human reviewed or changed the AI-supported result?

Failure modes

Where audit evidence usually breaks

Post-hoc narrative only

A written explanation prepared after a complaint may be useful, but it is weaker if the underlying decision record is missing or mutable.

No reason-code discipline

Free-text rationale alone is hard to analyze, hard to compare, and hard to use consistently across requests.

Broken artifact references

If the decision record does not identify the active model, prompt, policy, or data source, explanation quality suffers.

Over-disclosure risk

Explanation exports should support rights and review without unnecessarily exposing trade secrets, security controls, or unrelated personal data.

FAQ

Common compliance-team questions

Does Article 86 require exposing the full model?

This page does not provide legal advice. The evidence model focuses on preserving decision-specific context and integrity proof so qualified teams can prepare appropriate responses.

Can CertifiedData generate the final explanation text?

CertifiedData provides evidence infrastructure. Final explanation wording should be prepared through the organization's legal, compliance, and operational process.

Why use signed records for explanation requests?

Signed records reduce disputes about whether the evidence was changed after the decision, complaint, or request.

How does Article 86 relate to Articles 12, 13, and 14?

Article 12 helps preserve records, Article 13 supports transparency context, Article 14 supports oversight evidence, and Article 86 may require using that evidence to support a decision-specific explanation.

Sector evidence

Article 86 in the sectors that drive most explanation requests

The Article 86 right to explanation is triggered most often in lending (adverse-action defense), insurance (denial or premium loading), employment (rejection or termination), and public-sector (benefits or eligibility) deployments. The sector evidence pages show how the per-decision Decision Ledger record format supports those explanation requests.

Start with proof

Convert one AI event into a signed evidence record.

Decision Ledger turns an AI event into a canonical payload, SHA-256 hash, Ed25519 signature, key ID, verification result, and exportable evidence bundle. The demo is anonymous and does not require an account.

EU AI Act Article 86 Explanation Evidence | CertifiedData