
EU AI Act Article 72 Post-Market Monitoring: Evidence for AI System Review After Deployment

Answer box

Article 72 should be treated as an evidence workflow, not a static compliance note: it makes post-deployment evidence part of the governance system. High-risk AI teams need a monitoring plan, evidence streams, review decisions, and a way to update risk, documentation, and incident processes when real-world signals change. CertifiedData and Decision Ledger can support the evidence layer with SHA-256 artifact fingerprints, Ed25519 signatures, RFC 8785-style canonical payloads where appropriate, signed decision records, and exportable evidence bundles. This page is not legal advice and does not claim that any tool alone makes a system compliant.

Official basis to verify before publication

Post-market monitoring system and plan for high-risk AI systems, collecting, documenting, and analyzing relevant data on performance throughout the system lifecycle.

Editorial note: verify exact statutory language, numbering, applicability dates, and any post-publication Commission guidance against official EU sources before publishing. Keep the page framed as audit-readiness and evidence infrastructure, not legal compliance automation.

Why this matters

Many teams monitor uptime but not governance-relevant behavior. Article 72 pushes teams to monitor whether the AI system continues to perform as expected, whether new risks emerge, whether deployer feedback warrants action, and whether documentation or controls need updates.

For CertifiedData, the strategic opportunity is to translate regulatory language into evidence objects. A reader should leave this page understanding what records they may need, why screenshots are weak, how signed artifacts improve reviewability, and when to route into Decision Ledger or an evidence bundle.

Monitoring should produce reviewable records

A post-market monitoring system should produce evidence, not just dashboards. It should capture what was monitored, thresholds, review cadence, signals, anomalies, feedback, and decisions made because of those signals. Decision Ledger can record monitoring reviews, escalation decisions, and control changes.

Connecting monitoring to Articles 9 and 15

Monitoring informs risk management and performance claims. If live performance drifts, risk controls may need reassessment under Article 9. If accuracy, robustness, or cybersecurity assumptions change, Article 15 evidence may need updating. Article 72 is the feedback loop that keeps the evidence graph alive.

Monitoring and incidents

Not every monitoring signal is a serious incident, but monitoring should feed incident review. When a signal suggests harm, malfunction, or unexpected behavior, the record should route to Article 73 analysis. The evidence bundle should show the original signal, reviewer actions, escalation path, and final decision.
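As an illustration of this routing step, a minimal triage sketch might look like the following. The severity labels, field names, and review-path names are hypothetical assumptions for this example, not a statutory or CertifiedData schema.

```python
# Hypothetical sketch: routing a monitoring signal to a review path.
# Severity labels, field names, and path names are illustrative assumptions.

def route_signal(signal: dict) -> str:
    """Return the review path a monitoring signal should feed into."""
    if signal.get("suspected_harm") or signal.get("severity") == "serious":
        return "article_73_incident_review"   # escalate to incident analysis
    if signal.get("severity") == "moderate":
        return "risk_reassessment"            # revisit Article 9 controls
    return "routine_monitoring_review"        # review on the normal cadence

print(route_signal({"severity": "serious"}))
print(route_signal({"severity": "low"}))
```

Whatever the routing logic, the point is that each branch should leave a record: the original signal, the path chosen, and who chose it.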

CertifiedData's role

CertifiedData can preserve monitoring review records, related artifacts, version references, and signed decisions. It should not claim to perform all post-market surveillance automatically. The value is making the monitoring evidence exportable and verifiable.

Evidence matrix

| Evidence area | What the team should preserve | CertifiedData / Decision Ledger evidence object |
| --- | --- | --- |
| Monitoring plan | Define signals, thresholds, owners, cadence, and review process. | Monitoring plan record |
| Signal capture | Preserve performance, feedback, anomaly, and usage data summaries. | Monitoring evidence event |
| Review decision | Record whether action is required and who decided. | Signed review event |
| Risk/documentation updates | Connect monitoring results to Article 9 and Article 11 updates. | Change decision record |
| Incident escalation | Route severe signals into Article 73 incident review. | Incident triage record |

Example machine-readable evidence object

```json
{
  "evidence_type": "post_market_monitoring_record",
  "related_ai_act_articles": ["Article 72", "Article 9", "Article 15", "Article 73"],
  "system_id": "aisys_...",
  "signal_type": "performance_drift",
  "review_decision_id": "dec_...",
  "action_required": true
}
```

This example is intentionally illustrative. Production payloads should be versioned, canonicalized, signed, and linked to public or permissioned verification paths as appropriate.
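As a sketch of the canonicalize-then-fingerprint step, the snippet below uses Python's stdlib `json` with sorted keys and minimal separators as a rough approximation of RFC 8785 canonicalization. A production system would use a full JCS implementation and would sign the hash (e.g. with Ed25519) rather than stop at fingerprinting.

```python
import hashlib
import json

def canonical_fingerprint(payload: dict) -> str:
    """SHA-256 over a deterministically serialized payload.

    Sorted keys + minimal separators approximate RFC 8785 (JCS) for
    simple payloads; this is NOT a full JCS implementation (it omits,
    for example, JCS number normalization).
    """
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

record = {
    "evidence_type": "post_market_monitoring_record",
    "signal_type": "performance_drift",
    "action_required": True,
}
# Key order in the source dict does not affect the fingerprint.
reordered = {
    "action_required": True,
    "signal_type": "performance_drift",
    "evidence_type": "post_market_monitoring_record",
}
assert canonical_fingerprint(record) == canonical_fingerprint(reordered)
print(canonical_fingerprint(record))
```

Deterministic serialization is what makes the fingerprint reproducible by a later reviewer: the same logical payload always yields the same hash, regardless of how it was stored or re-emitted.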

What CertifiedData can prove

CertifiedData can help prove that a particular evidence payload existed at a particular time, was associated with a stable artifact identifier, was signed by a known key, and has not changed since signing. For datasets and AI artifacts, this can include SHA-256 fingerprints, certificate metadata, issuer identity, timestamp, schema version, and verification status. For Decision Ledger records, it can include actor, action, system version, referenced artifacts, rationale, chain position, hash, signature, and key ID.
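The "chain position" property mentioned above can be illustrated with a minimal hash-chain sketch, where each record's hash covers its predecessor's hash, so editing any earlier record breaks every later link. Field names here are illustrative assumptions, not the Decision Ledger schema.

```python
import hashlib
import json

def record_hash(payload: dict, prev_hash: str) -> str:
    """Hash a ledger payload together with its predecessor's hash."""
    body = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((prev_hash + body).encode("utf-8")).hexdigest()

def verify_chain(records: list) -> bool:
    """Recompute each hash in order; any edited record breaks the chain."""
    prev = "0" * 64  # genesis value
    for rec in records:
        if rec["hash"] != record_hash(rec["payload"], prev):
            return False
        prev = rec["hash"]
    return True

# Build a two-record chain, then tamper with the first payload.
prev = "0" * 64
chain = []
for payload in [{"action": "review", "decision": "no_action"},
                {"action": "review", "decision": "escalate"}]:
    h = record_hash(payload, prev)
    chain.append({"payload": payload, "hash": h})
    prev = h

assert verify_chain(chain)
chain[0]["payload"]["decision"] = "escalate"  # simulated tampering
assert not verify_chain(chain)
```

A real ledger would additionally sign each hash, but even this bare structure shows why chain position matters: it converts "trust that nothing changed" into "recompute and check".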

What CertifiedData does not prove

CertifiedData does not determine legal compliance, replace conformity assessment, guarantee fairness, prove that a model is accurate, or certify that a risk control is sufficient. It does not turn a weak governance process into a compliant process by itself. Its role is narrower and stronger: preserve verifiable evidence so compliance, legal, engineering, procurement, and audit stakeholders can review the system with less reliance on trust, memory, or screenshots.

FAQ

Is monitoring only a provider obligation?

This page should focus on provider monitoring under Article 72 while linking to deployer operational monitoring obligations under Article 26.

What evidence should a monitoring review preserve?

Signal, threshold, system version, reviewed artifacts, decision, owner, action, and follow-up cadence.
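The fields listed in this answer can be sketched as a simple record type. This is a hypothetical shape for illustration, not the actual Decision Ledger schema; all field names and example values are assumptions.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class MonitoringReviewRecord:
    """Hypothetical shape for a preserved monitoring review.

    Field names are illustrative assumptions, not a CertifiedData schema.
    """
    signal: str
    threshold: str
    system_version: str
    decision: str
    owner: str
    action: str
    follow_up_cadence: str
    reviewed_artifacts: list = field(default_factory=list)

rec = MonitoringReviewRecord(
    signal="performance_drift",
    threshold="accuracy < 0.92 over 7-day window",
    system_version="v2.3.1",
    decision="action_required",
    owner="ml-governance-team",
    action="retrain_and_reassess_risk",
    follow_up_cadence="weekly",
    reviewed_artifacts=["cert_example"],
)
print(asdict(rec)["decision"])
```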

How does this support MRR for CertifiedData?

Monitoring creates recurring evidence events. Decision Ledger and evidence bundles become ongoing infrastructure, not a one-time document.

Suggested JSON-LD

Use TechArticle plus FAQPage when converting this Markdown into page.tsx. Include breadcrumbs under /eu-ai-act and keep the canonical URL at https://certifieddata.io/eu-ai-act/article-72-post-market-monitoring.

Editorial checklist

  • Confirm official EU AI Act article wording and current applicability timing.
  • Keep evidence/readiness language; avoid saying "guarantees compliance" or "satisfies the EU AI Act."
  • Preserve at least five internal links.
  • Preserve both CTAs.
  • Add schema JSON-LD in the final TSX page.
  • Keep final user-facing copy above 1,000 words.

Implementation pattern for CertifiedData teams

A practical implementation should start with a small evidence inventory. Identify the system, its intended purpose, the operator role, the datasets and artifacts it depends on, the human decisions that approve or reject its use, and the monitoring signals that should trigger review. Then decide which records belong in CertifiedData certificates and which records belong in Decision Ledger. The goal is not to collect every possible event. The goal is to preserve the records that make a later review possible: what changed, who approved it, what evidence was available, and how the record can be verified.

For this article page, the strongest commercial path is a demo that shows a signed record, a related artifact certificate, and an exportable bundle. The page should invite the reader to move from reading about obligations to seeing how evidence can be structured. Link to the Decision Ledger demo for the fastest proof point, then to the sample evidence bundle for the buyer who needs something to share with legal, procurement, or security.

Make it real

Generate a signed evidence record and verify it yourself.

The anonymous demo turns one AI event into a canonical payload, SHA-256 hash, Ed25519 signature, key id, and verification result — exactly the shape an evidence package relies on.
