CertifiedData.io
Annex III · Law enforcement

Annex III law-enforcement AI evidence records

AI used in law-enforcement contexts, or on their behalf, requires especially careful evidence discipline. Records should show lawful workflow context, system purpose, data references, model output, reviewer action, and independent verification without expanding access to sensitive case information.

Built for public-sector technology teams, oversight officers, procurement teams, legal reviewers, and system providers that need evidence infrastructure for sensitive law-enforcement AI workflows.

Plain-English classification

What this Annex III category means in practice

Law-enforcement AI workflows may involve triage, analysis, prioritization, pattern detection, risk assessment, evidence review, or decision support. These contexts demand strict separation between the AI output and the human/legal action that follows. Evidence records should preserve context, provenance, reviewer action, and integrity while minimizing exposure of sensitive investigative data.

Example systems

Use cases compliance teams should inventory

AI systems supporting triage, prioritization, pattern analysis, or investigative review.
Decision-support tools that surface leads, alerts, similarities, links, or risk indicators.
Systems used to analyze evidence, communications, images, documents, or case materials.
Resource allocation or operational support tools where AI outputs can influence enforcement action.
Vendor systems procured by law-enforcement agencies with audit, oversight, or procurement obligations.
Human-review workflows where officers or analysts must document acceptance, rejection, or escalation of an AI output.

Evidence map

Evidence fields to preserve for review

These fields are not a complete compliance program. They are the evidence primitives that make later review possible: who or what acted, what context applied, which artifact or policy was used, how human oversight happened, and whether the record still verifies.

Case or event reference

Use controlled references and redaction rules rather than placing sensitive investigative details into public or broadly accessible records.

Lawful workflow context

Capture the intended purpose, authorized use, policy version, operator role, and whether the output was advisory or action-triggering.

Data and artifact references

Reference datasets, watchlists, evidence packages, model artifacts, prompts, or rulesets with certificates or fingerprints where possible.
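A plain SHA-256 digest is enough to reference an artifact without copying it into the record. A minimal sketch (the ruleset content is illustrative):

```python
import hashlib

def fingerprint(artifact: bytes) -> str:
    """Stable sha256:<hex> reference for any artifact (model, ruleset, prompt)."""
    return "sha256:" + hashlib.sha256(artifact).hexdigest()

# Reference the ruleset by digest; the bytes themselves stay access-controlled.
ruleset = b'{"rule": "flag_if_score_above", "threshold": 0.8}'
ruleset_ref = fingerprint(ruleset)
```

The evidence record then carries only `ruleset_ref`, so reviewers can confirm which ruleset applied without the record disclosing its contents.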

Output and rationale

Record the AI output, confidence, reason codes, flags, or match details in a structured way suitable for later review.

Human reviewer action

Record whether an officer, analyst, supervisor, or review board accepted, rejected, escalated, or ignored the AI output.

Verification metadata

Preserve hash, signature, timestamp, key ID, and verification result without disclosing sensitive content unnecessarily.
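As a sketch, the evidence fields above might map onto a minimal record like the following. Field names and values are illustrative, not a CertifiedData schema:

```python
import json
from datetime import datetime, timezone

# Illustrative evidence record: controlled references only, no raw case content.
record = {
    "event_type": "ai_triage_output",
    "case_ref": "case-7f3a",                 # pseudonymous case reference
    "workflow_context": {
        "intended_purpose": "lead prioritization",
        "policy_version": "2024-11",
        "operator_role": "analyst",
        "output_mode": "advisory",           # advisory vs. action-triggering
    },
    "artifact_refs": ["sha256:3b6fe9cbd135"],  # fingerprints, not payloads
    "ai_output": {"risk_band": "medium", "reason_codes": ["R12", "R45"]},
    "reviewer_action": {"action": "escalated", "reviewer_role": "supervisor"},
    "recorded_at": datetime.now(timezone.utc).isoformat(),
}

# Canonical form: stable key order, no whitespace -- the bytes that get hashed.
canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
```

Keeping a closed vocabulary for fields like `reviewer_action` and `output_mode` is what makes the records queryable during later review.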

Provider evidence

If your organization builds or places the system on the market

  • Document intended use, prohibited use, limitations, performance evidence, data provenance, and oversight requirements.
  • Preserve fingerprints for model artifacts, reference datasets, rulesets, prompts, and evaluation evidence.
  • Design logging schemas that support oversight while respecting confidentiality and access controls.
  • Maintain monitoring evidence for drift, incident review, access, and system changes.

Deployer evidence

If your organization operates the system in a workflow

  • Record authorized use, operator role, reviewer action, escalation path, and applicable policy for each AI-assisted event.
  • Retain logs under agency or deployer control with appropriate security and access restrictions.
  • Export redacted evidence bundles for oversight, procurement, or legal review where appropriate.
  • Avoid exposing sensitive case details in public verification surfaces; use references and controlled disclosure.

Audit questions

Questions this evidence trail should answer

  • What authorized workflow and intended purpose applied to this AI event?
  • Which data sources, models, or reference artifacts were used?
  • Was the output advisory, investigatory, or tied to an operational action?
  • Who reviewed or approved the AI output before action?
  • Can oversight reviewers verify integrity without broad access to sensitive case systems?

Workflow

From AI event to reviewable evidence

  1. Classify the workflow

    Identify the intended purpose, operator role, affected persons, and whether the system may fall within an Annex III high-risk category.

  2. Define required evidence

    Choose which decision events, artifacts, model versions, policies, human review events, and retention rules must be recorded.

  3. Sign records at the point of action

    Canonicalize the payload, compute a SHA-256 hash, sign with Ed25519, and preserve the key ID and verification path.
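This signing step can be sketched with the common `cryptography` package. The key handling and key ID below are illustrative; production keys would live in a KMS or HSM:

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

payload = {"event_type": "ai_triage_output", "case_ref": "case-7f3a"}

# 1. Canonicalize: deterministic key order, no whitespace.
canonical = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()

# 2. Hash the canonical bytes.
digest = hashlib.sha256(canonical).hexdigest()

# 3. Sign the canonical bytes with Ed25519.
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(canonical)

# 4. Preserve the key ID and verification material with the record.
evidence = {
    "payload": payload,
    "sha256": digest,
    "signature": signature.hex(),
    "key_id": "demo-key-1",  # illustrative identifier
}
```

Canonicalization matters because JSON with different key order or whitespace produces different bytes, and therefore a different hash and signature.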

  4. Export and verify

    Give compliance, legal, procurement, or regulators a JSON or PDF bundle that can be verified without production-system access.
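Verification needs only the bundle and the signer's public key, which is the point: a reviewer can check integrity offline. A sketch, again using `cryptography` (the bundle fields mirror the illustrative record above, not a fixed export format):

```python
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def verify_bundle(bundle: dict, public_key: Ed25519PublicKey) -> bool:
    """Recompute the hash and check the signature; no production access needed."""
    data = json.dumps(bundle["payload"], sort_keys=True,
                      separators=(",", ":")).encode()
    if hashlib.sha256(data).hexdigest() != bundle["sha256"]:
        return False  # payload was altered after hashing
    try:
        public_key.verify(bytes.fromhex(bundle["signature"]), data)
        return True
    except InvalidSignature:
        return False

# Simulate a bundle exported elsewhere.
key = Ed25519PrivateKey.generate()
payload = {"case_ref": "case-7f3a", "reviewer_action": "escalated"}
data = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
bundle = {
    "payload": payload,
    "sha256": hashlib.sha256(data).hexdigest(),
    "signature": key.sign(data).hex(),
}
```

Any change to the payload after signing makes either the hash comparison or the signature check fail, which is what makes the record tamper-evident.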

Guardrails

Evidence support is not a compliance guarantee

Evidence is not a legal conclusion

CertifiedData can preserve signed, tamper-evident records that support review. It does not determine whether an AI system is high-risk, lawful, fair, accurate, or compliant.

Minimize sensitive data

Use pseudonymous identifiers, references, redaction rules, and retention policies so the evidence trail supports review without overcollecting personal or protected data.
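One common minimization technique is a keyed pseudonym: records stay joinable across events, but mapping a reference back to the real identifier requires a secret that never leaves the agency boundary. A sketch using the standard library (the secret handling is illustrative):

```python
import hashlib
import hmac

SECRET = b"rotate-me"  # illustrative; keep in a secrets manager, rotate per policy

def pseudonym(real_id: str) -> str:
    """Deterministic, non-reversible reference for use in evidence records."""
    mac = hmac.new(SECRET, real_id.encode(), hashlib.sha256).hexdigest()
    return "ref-" + mac[:12]
```

An HMAC rather than a bare hash prevents anyone without the secret from confirming a guessed identifier by recomputing its digest.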

Human oversight remains a governance control

A record can show whether human review was required, performed, or overridden. It does not prove that the human oversight design was legally sufficient.

Scope depends on facts

Annex III classification depends on the intended purpose, user context, sector, role, and deployment facts. Treat these pages as evidence guides, not legal advice.

Start with proof

Generate one signed decision record and verify it yourself.

The anonymous demo shows the evidence model before any integration: payload, hash, signature, key ID, verification result, and exportable evidence record.

FAQ

Does CertifiedData make this system compliant?

No. CertifiedData provides evidence infrastructure: signed decision records, artifact provenance, retention support, and independent verification. Compliance depends on the system, use case, governance process, documentation, testing, oversight, and legal review.

What should we test first?

Start with the anonymous Decision Ledger demo and the sample Article 12 evidence bundle. They show the signed payload, SHA-256 hash, Ed25519 signature, key ID, and verification result before any production integration.

What is the first record to create for law enforcement?

Create a signed Decision Ledger sample that captures the event type, system context, evidence references, human review status, and verification metadata. Then compare the sample bundle to your production workflow fields.

Related evidence surfaces
