Annex III education and vocational training AI evidence records
AI used for admissions, assessment, placement, monitoring, or vocational training decisions can affect educational access and life opportunities. Evidence records should preserve scoring context, rubric versions, model outputs, accommodations, and human review.
Built for education technology providers, universities, vocational programs, public-sector education teams, assessment vendors, and compliance officers responsible for AI-assisted educational decisions.
Plain-English classification
What this Annex III category means in practice
Education and vocational training AI systems may influence admission, access, scoring, progression, certification, program placement, or learning-path decisions. The evidence trail should distinguish routine educational analytics from decisions that materially affect access or outcomes. Reviewers need to know which learner, course, assessment, rubric, data source, model, and reviewer context applied to each decision.
Example systems
Use cases compliance teams should inventory
Evidence map
Evidence fields to preserve for review
These fields are not a complete compliance program. They are the evidence primitives that make later review possible: who or what acted, what context applied, which artifact or policy was used, how human oversight happened, and whether the record still verifies.
Learner or applicant reference
Use a pseudonymous student, candidate, or assessment ID rather than storing unnecessary personal data in the record.
Educational context
Capture program, course, assessment, rubric, cohort, accommodation, and decision stage.
Model and scoring version
Record model version, scoring rubric, prompt, threshold, calibration, and configuration used at the time of decision.
Outcome and rationale
Store the score, ranking, flag, eligibility decision, placement, or escalation with reason codes and rationale summary.
Human review and appeal
Record whether a teacher, assessor, administrator, or review board confirmed, changed, or overrode the AI output.
Verification metadata
Preserve hash, signature, timestamp, key ID, and verification URL for independent evidence review.
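A minimal sketch of how these primitives could hang together in one record, in Python. The field names and grouping are illustrative assumptions, not CertifiedData's actual schema:

```python
from typing import TypedDict

class HumanReview(TypedDict):
    reviewer_role: str      # e.g. "assessor", "administrator", "review_board"
    action: str             # "confirmed" | "changed" | "overridden"
    note_ref: str           # reference to the review rationale, not free text

class EducationEvidenceRecord(TypedDict):
    learner_ref: str        # pseudonymous ID, never a raw name or student number
    context: dict           # program, course, assessment, cohort, decision stage
    model_version: str      # model, prompt, threshold, calibration at decision time
    rubric_version: str
    outcome: dict           # score, flag, placement, reason codes, rationale summary
    human_review: HumanReview
    sha256: str             # hash of the canonical payload
    signature: str          # Ed25519 signature, hex-encoded
    key_id: str
    verification_url: str
```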
Provider evidence
If your organization builds or places the system on the market
- Document training data provenance, scoring design, validation results, limits, bias review, and intended educational use.
- Preserve certificates for rubrics, model artifacts, prompt versions, assessment templates, or reference datasets.
- Define record schemas for scores, eligibility events, proctoring flags, placement outputs, and review actions (see the sketch after this list).
- Keep technical documentation aligned with instructions for use and monitoring evidence.
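One way a provider might pin down those record schemas is a per-event-type required-field check applied before signing. The event names and field sets below are illustrative assumptions:

```python
from enum import Enum

class EventType(str, Enum):
    SCORE = "score"
    ELIGIBILITY = "eligibility"
    PROCTORING_FLAG = "proctoring_flag"
    PLACEMENT = "placement"
    REVIEW_ACTION = "review_action"

# Illustrative minimum fields per event type; a real schema would be richer.
REQUIRED_FIELDS: dict[EventType, set[str]] = {
    EventType.SCORE: {"learner_ref", "assessment_id", "rubric_version", "model_version", "score"},
    EventType.ELIGIBILITY: {"learner_ref", "program_id", "decision", "reason_codes"},
    EventType.PROCTORING_FLAG: {"learner_ref", "assessment_id", "flag_type", "model_version"},
    EventType.PLACEMENT: {"learner_ref", "program_id", "placement", "reason_codes"},
    EventType.REVIEW_ACTION: {"learner_ref", "target_record_id", "reviewer_role", "action"},
}

def missing_fields(event_type: EventType, record: dict) -> set[str]:
    """Fields the record must still carry before it is signed."""
    return REQUIRED_FIELDS[event_type] - record.keys()
```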
Deployer evidence
If your organization operates the system in a workflow
- Record when the institution or program used the system and whether staff followed the provider's instructions.
- Retain human review actions, appeal outcomes, accommodations, and policy versions relevant to the decision.
- Use redacted exports so compliance teams can review evidence without exposing unnecessary learner information (sketched after this list).
- Monitor model behavior across cohorts, courses, or assessment periods and preserve signed monitoring records.
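On redacted exports: because removing fields from a signed payload breaks verification, a redacted copy can only drop data that was never signed. A sketch, assuming records separate a signed payload from unsigned attachments:

```python
def redacted_export(record: dict) -> dict:
    """Return a review copy: signed payload, hash, signature, and key ID stay;
    unsigned attachments holding richer learner details are dropped.

    This is one reason to keep only pseudonymous references inside the
    signed payload and put anything sensitive in unsigned attachments.
    """
    return {k: v for k, v in record.items() if k != "attachments"}
```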
Audit questions
Questions this evidence trail should answer
- Which learner, assessment, program, or training context was affected?
- Which rubric, model, threshold, or scoring configuration applied?
- Was the output advisory, decisive, or subject to human confirmation?
- Were accommodations, appeals, or review rights reflected in the evidence trail?
- Can the record be exported without exposing unnecessary learner data?
Workflow
From AI event to reviewable evidence
1. Classify the workflow
Identify the intended purpose, operator role, affected persons, and whether the system may fall within an Annex III high-risk category.
2. Define required evidence
Choose which decision events, artifacts, model versions, policies, human review events, and retention rules must be recorded.
3. Sign records at the point of action
Canonicalize the payload, compute a SHA-256 hash, sign with Ed25519, and preserve the key ID and verification path.
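A minimal sketch of this step in Python with the `cryptography` package, using sorted-key JSON as a stand-in canonicalization (the real canonicalization scheme and signing API are assumptions here):

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_record(payload: dict, key: Ed25519PrivateKey, key_id: str) -> dict:
    # Canonicalize: stable key order and separators so the same payload
    # always yields the same bytes (a stand-in for a spec like RFC 8785).
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    return {
        "payload": payload,
        "sha256": hashlib.sha256(canonical).hexdigest(),  # content hash
        "signature": key.sign(canonical).hex(),           # Ed25519 over the canonical bytes
        "key_id": key_id,                                 # tells verifiers which public key to use
    }
```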
4. Export and verify
Give compliance, legal, procurement, or regulators a JSON or PDF bundle that can be verified without production-system access.
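Verification then needs only the exported bundle and the published public key, not production access. A sketch under the same canonicalization assumption:

```python
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_bundle(bundle: dict, public_key_bytes: bytes) -> bool:
    canonical = json.dumps(bundle["payload"], sort_keys=True, separators=(",", ":")).encode()
    if hashlib.sha256(canonical).hexdigest() != bundle["sha256"]:
        return False  # payload no longer matches the recorded hash
    try:
        Ed25519PublicKey.from_public_bytes(public_key_bytes).verify(
            bytes.fromhex(bundle["signature"]), canonical
        )
        return True
    except InvalidSignature:
        return False  # signature does not match this payload and key
```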
Guardrails
Evidence support is not a compliance guarantee
Evidence is not a legal conclusion
CertifiedData can preserve signed, tamper-evident records that support review. It does not determine whether an AI system is high-risk, lawful, fair, accurate, or compliant.
Minimize sensitive data
Use pseudonymous identifiers, references, redaction rules, and retention policies so the evidence trail supports review without overcollecting personal or protected data.
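One common pattern for pseudonymous identifiers is a keyed hash, so the mapping back to the real learner ID is not reversible without a secret the institution controls. A sketch, assuming HMAC-SHA256 and institution-held key material:

```python
import hashlib
import hmac

def pseudonymous_ref(student_id: str, secret: bytes) -> str:
    # Deterministic per student (the same learner maps to the same reference)
    # but not reversible or linkable without the institution's secret.
    return "cand-" + hmac.new(secret, student_id.encode(), hashlib.sha256).hexdigest()
```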
Human oversight remains a governance control
A record can show whether human review was required, performed, or overridden. It does not prove that the human oversight design was legally sufficient.
Scope depends on facts
Annex III classification depends on the intended purpose, user context, sector, role, and deployment facts. Treat these pages as evidence guides, not legal advice.
Start with proof
Generate one signed decision record and verify it yourself.
The anonymous demo shows the evidence model before any integration: payload, hash, signature, key ID, verification result, and exportable evidence record.
FAQ
Does CertifiedData make this system compliant?
No. CertifiedData provides evidence infrastructure: signed decision records, artifact provenance, retention support, and independent verification. Compliance depends on the system, use case, governance process, documentation, testing, oversight, and legal review.
What should we test first?
Start with the anonymous Decision Ledger demo and the sample Article 12 evidence bundle. They show the signed payload, SHA-256 hash, Ed25519 signature, key ID, and verification result before any production integration.
What is the first record to create for education vocational training?
Create a signed Decision Ledger sample that captures the event type, system context, evidence references, human review status, and verification metadata. Then compare the sample bundle to your production workflow fields.
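A self-contained sketch of such a first sample record; every field value here is hypothetical, and the payload shape is an assumption rather than the product schema:

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

key = Ed25519PrivateKey.generate()  # demo key; production keys would be managed, not generated inline
payload = {
    "event_type": "score",
    "learner_ref": "cand-7f3a9c",  # pseudonymous reference, not a real student ID
    "context": {"assessment": "essay-task-2", "rubric_version": "3.1"},
    "model_version": "grader-2024-11",
    "outcome": {"score": 78, "reason_codes": ["RUBRIC_CRITERION_3_MET"]},
    "human_review": {"status": "pending"},
}
canonical = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
record = {
    "payload": payload,
    "sha256": hashlib.sha256(canonical).hexdigest(),
    "signature": key.sign(canonical).hex(),
    "key_id": "demo-key-1",
}
print(json.dumps(record, indent=2))
```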
Related evidence surfaces