Annex III critical infrastructure AI evidence records
AI used in the management or operation of critical infrastructure can affect safety, continuity, and public-service resilience. Compliance teams need decision evidence that connects alerts, model versions, operational context, operator action, and fail-safe controls.
Built for infrastructure operators, risk teams, safety engineers, compliance staff, and security reviewers responsible for AI-assisted decisions in utilities, transport, digital infrastructure, energy, water, communications, or other critical-service workflows.
Plain-English classification
What this Annex III category means in practice
Critical infrastructure AI is rarely just a dashboard prediction. It may influence routing, load balancing, maintenance prioritization, safety alerts, anomaly detection, access decisions, or emergency response. A reviewable evidence trail should show the asset or network segment involved, the operational state, the model output, the policy threshold, and the human or automated action taken after the output.
Example systems
Use cases compliance teams should inventory
Evidence map
Evidence fields to preserve for review
These fields are not a complete compliance program. They are the evidence primitives that make later review possible: who or what acted, what context applied, which artifact or policy was used, how human oversight happened, and whether the record still verifies.
Asset or system context
Record the infrastructure asset, network segment, operating state, sensor source, or service domain involved in the AI event.
Alert or decision type
Capture whether the output was a prediction, anomaly flag, dispatch, triage call, rerouting, maintenance priority, or safety escalation.
Model and threshold version
Preserve model version, threshold, ruleset, policy, configuration, and confidence score at the time of output.
Operator action
Record whether the recommendation was accepted, rejected, escalated, overridden, or deferred by a human operator.
Artifact lineage
Link the decision to certified model artifacts, reference data, sensor calibration records, or policy versions.
Verification metadata
Include signed payload, hash, key ID, timestamp, and verifier result so later reviewers can detect tampering.
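Taken together, these fields suggest a minimal record shape. The sketch below is illustrative only: the class and field names are assumptions for this page, not a CertifiedData schema.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class EvidenceRecord:
    """Illustrative evidence record; all field names are hypothetical."""
    # Asset or system context
    asset_id: str
    operating_state: str
    # Alert or decision type
    decision_type: str            # e.g. "anomaly", "rerouting", "safety_escalation"
    # Model and threshold version
    model_version: str
    threshold: float
    confidence: float
    # Operator action
    operator_action: str          # "accepted" | "rejected" | "escalated" | ...
    # Artifact lineage (references, not the artifacts themselves)
    artifact_refs: list = field(default_factory=list)
    # Verification metadata, filled in at signing time
    payload_hash: str = ""
    signature: str = ""
    key_id: str = ""
    timestamp: str = ""

record = EvidenceRecord(
    asset_id="substation-12/feeder-3",
    operating_state="peak-load",
    decision_type="anomaly",
    model_version="2024.06.1",
    threshold=0.8,
    confidence=0.93,
    operator_action="escalated",
    artifact_refs=["model-fingerprint-ref", "calibration-record-ref"],
)
```

The point of the dataclass is that each evidence primitive above maps to a concrete, reviewable field rather than living only in free-text logs.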
Provider evidence
If your organization builds or places the system on the market
- Document model limits, operational assumptions, safety constraints, cybersecurity controls, and validation evidence.
- Preserve model and dataset fingerprints used for infrastructure predictions or anomaly detection.
- Define record schemas for alerts, recommendations, operator actions, and fail-safe events.
- Maintain evidence for post-market monitoring and incident reconstruction where the system changes over time.
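The model and dataset fingerprints mentioned above can be produced with a streaming SHA-256 digest. A minimal stdlib sketch (chunk size and the demo file are assumptions):

```python
import hashlib
import os
import tempfile

def fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a model or dataset file,
    read in chunks so large artifacts never load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo on a throwaway file standing in for a model artifact.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"model-weights-bytes")
demo_digest = fingerprint(tmp.name)
os.unlink(tmp.name)
```

Storing the digest alongside the record lets a later reviewer confirm that the artifact referenced by a decision is byte-for-byte the one that was validated.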
Deployer evidence
If your organization operates the system in a workflow
- Record operational context, operator response, escalation path, and whether instructions for use were followed.
- Retain logs under deployer control and connect them to incident tickets, maintenance actions, or service-impact reviews.
- Define access control and redaction so sensitive infrastructure details are protected in exported evidence.
- Use evidence bundles to support procurement, regulator, safety board, or incident-review requests.
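Redaction before export can be as simple as replacing the values of sensitive keys in a nested record. A sketch, assuming an illustrative key list (which keys count as sensitive is a deployer policy decision, not a built-in default):

```python
SENSITIVE_KEYS = {"asset_id", "network_segment", "sensor_source"}  # illustrative

def redact(obj, sensitive=SENSITIVE_KEYS, placeholder="[REDACTED]"):
    """Recursively replace values of sensitive keys in nested dicts/lists
    so an exported bundle reveals structure without infrastructure detail."""
    if isinstance(obj, dict):
        return {
            k: placeholder if k in sensitive else redact(v, sensitive, placeholder)
            for k, v in obj.items()
        }
    if isinstance(obj, list):
        return [redact(v, sensitive, placeholder) for v in obj]
    return obj
```

Note that redaction must happen before signing the export copy, or after signing with a disclosed redaction map; otherwise the redacted bundle will no longer match the original hash.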
Audit questions
Questions this evidence trail should answer
- Which asset, network, or operational context was affected by the AI output?
- Which model, threshold, policy, and data source produced the alert or recommendation?
- What human or automated action followed the output?
- Was the decision linked to fail-safe, monitoring, or incident-response procedures?
- Can the record prove integrity without access to the control system?
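The last question, proving integrity without control-system access, comes down to an offline check: recompute the canonical hash and re-verify the signature from the record alone. A stdlib sketch; HMAC-SHA256 stands in here for Ed25519, and a real verifier would resolve the record's key ID to a public key:

```python
import hashlib
import hmac
import json

def verify_record(payload: dict, expected_hash: str, signature: str, key: bytes) -> bool:
    """Offline integrity check using only the exported record and key material.
    HMAC-SHA256 is a stdlib stand-in for Ed25519 signature verification."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    if hashlib.sha256(canonical).hexdigest() != expected_hash:
        return False  # payload was altered after hashing
    expected_sig = hmac.new(key, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected_sig, signature)

# Demo values, all illustrative: a record "signed" elsewhere.
payload = {"asset_id": "feeder-3", "decision_type": "anomaly"}
canonical = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
good_hash = hashlib.sha256(canonical).hexdigest()
good_sig = hmac.new(b"demo-key", canonical, hashlib.sha256).hexdigest()
```

Because every input to the check travels with the exported bundle, a safety board or regulator never needs a login to the production control system.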
Workflow
From AI event to reviewable evidence
1. Classify the workflow: Identify the intended purpose, operator role, affected persons, and whether the system may fall within an Annex III high-risk category.
2. Define required evidence: Choose which decision events, artifacts, model versions, policies, human review events, and retention rules must be recorded.
3. Sign records at the point of action: Canonicalize the payload, compute a SHA-256 hash, sign with Ed25519, and preserve the key ID and verification path.
4. Export and verify: Give compliance, legal, procurement, or regulators a JSON or PDF bundle that can be verified without production-system access.
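Step 3, canonicalize, hash, sign, can be sketched with the standard library. HMAC-SHA256 is used below as a stdlib stand-in for Ed25519, which in Python requires a third-party library such as `cryptography`; the output fields are illustrative, not a CertifiedData format:

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

def sign_record(payload: dict, key: bytes, key_id: str) -> dict:
    # 1. Canonicalize: stable key order, no insignificant whitespace,
    #    so the same payload always hashes to the same bytes.
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    # 2. Hash the canonical payload.
    digest = hashlib.sha256(canonical).hexdigest()
    # 3. Sign. HMAC-SHA256 here; production would use an Ed25519 private key.
    signature = hmac.new(key, canonical, hashlib.sha256).hexdigest()
    return {
        "payload": payload,
        "sha256": digest,
        "signature": signature,
        "key_id": key_id,
        "signed_at": datetime.now(timezone.utc).isoformat(),
    }

bundle = sign_record({"asset_id": "feeder-3", "operator_action": "escalated"},
                     key=b"demo-key", key_id="key-01")
```

Canonicalization is the step most often skipped and most often regretted: without it, a semantically identical payload serialized with different key order produces a different hash and fails verification.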
Guardrails
Evidence support is not a compliance guarantee
Evidence is not a legal conclusion
CertifiedData can preserve signed, tamper-evident records that support review. It does not determine whether an AI system is high-risk, lawful, fair, accurate, or compliant.
Minimize sensitive data
Use pseudonymous identifiers, references, redaction rules, and retention policies so the evidence trail supports review without overcollecting personal or protected data.
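One common way to get pseudonymous identifiers is keyed hashing, so records can consistently reference the same operator or asset without storing the raw identifier. A sketch under that assumption (the truncation length and secret handling are illustrative choices):

```python
import hashlib
import hmac

def pseudonym(identifier: str, secret: bytes) -> str:
    """Derive a stable pseudonym from an identifier. The same input and
    secret always yield the same pseudonym; without the secret, the raw
    identifier cannot be recomputed from the evidence trail."""
    return hmac.new(secret, identifier.encode(), hashlib.sha256).hexdigest()[:16]
```

The secret must be managed like any other key: rotating it breaks linkage across records, which may itself be a deliberate retention choice.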
Human oversight remains a governance control
A record can show whether human review was required, performed, or overridden. It does not prove that the human oversight design was legally sufficient.
Scope depends on facts
Annex III classification depends on the intended purpose, user context, sector, role, and deployment facts. Treat these pages as evidence guides, not legal advice.
Start with proof
Generate one signed decision record and verify it yourself.
The anonymous demo shows the evidence model before any integration: payload, hash, signature, key ID, verification result, and exportable evidence record.
FAQ
Does CertifiedData make this system compliant?
No. CertifiedData provides evidence infrastructure: signed decision records, artifact provenance, retention support, and independent verification. Compliance depends on the system, use case, governance process, documentation, testing, oversight, and legal review.
What should we test first?
Start with the anonymous Decision Ledger demo and the sample Article 12 evidence bundle. They show the signed payload, SHA-256 hash, Ed25519 signature, key ID, and verification result before any production integration.
What is the first record to create for critical infrastructure?
Create a signed Decision Ledger sample that captures the event type, system context, evidence references, human review status, and verification metadata. Then compare the sample bundle to your production workflow fields.
Related evidence surfaces