Annex III high-risk AI systems in employment, lending, insurance, and education require conformity assessment with documented risk management, bias testing, and human oversight. Most can self-assess — but without independent evaluation, that self-assessment has no credibility. AI Asset Assurance maps directly to Articles 9 through 15.
Request evaluation

Every requirement maps directly to an AI Asset Assurance capability. One evaluation pipeline covers everything you need.
Establish and maintain a risk management system throughout the AI system's lifecycle. Identify and analyze known and foreseeable risks. Adopt suitable risk management measures.
Training, validation, and testing data must be relevant, representative, free of errors, and complete. Data governance and management practices must be appropriate.
Technical documentation must be drawn up before the system is placed on the market and kept up to date. Must demonstrate compliance with all Chapter 2 requirements.
High-risk AI systems must allow for automatic recording of events (logs) throughout the system's lifetime, to ensure traceability of the system's functioning.
Systems must be designed to be sufficiently transparent to enable deployers to interpret the system's output and use it appropriately.
Systems must be designed to be effectively overseen by natural persons during the period in which the system is in use. Oversight measures must be commensurate with the risks.
Systems must achieve appropriate levels of accuracy, robustness, and cybersecurity and perform consistently throughout their lifecycle.
Providers of Annex III high-risk AI systems must carry out conformity assessment before placing the system on the market or putting it into service.
These are the organizations most likely to need compliance evaluation under this regulation.
Any AI used for recruitment, screening, hiring, promotion, or termination decisions affecting EU residents. Includes ATS systems, resume screeners, and interview analysis tools.
AI used for creditworthiness assessment, credit scoring, risk assessment for life and health insurance pricing. Includes automated lending decisions and insurance underwriting.
AI systems used for risk assessment and pricing in life and health insurance. Algorithmic underwriting and claims processing that affects policy terms or coverage.
AI used for determining access to education, evaluating learning outcomes, or monitoring student behavior during testing. Includes adaptive learning platforms and automated grading.
AI used for evaluating eligibility for public benefits, services, or essential services. Includes welfare determination, migration, and border control systems.
The EU AI Act has extraterritorial reach. If your AI system's output is used within the EU — even if your company is US-based — you are in scope. This includes SaaS platforms serving EU enterprises.
AI Asset Assurance doesn't just find problems — it identifies root causes with Shapley attribution and prescribes exactly how to fix them.
Five structurally independent validators evaluate your AI system across bias, accuracy, robustness, transparency, and safety dimensions. Each validator runs on separate infrastructure from separate organizations — no single point of failure or compromise.
Shapley attribution identifies the root cause of any issue found — which specific input variables, training data patterns, or system configurations drive each problem. You don't just learn what's wrong; you learn exactly how to fix it, mapped to specific EU AI Act articles.
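To make the mechanism concrete, here is a minimal sketch of exact Shapley attribution: each variable's share of an outcome is its average marginal contribution over all orderings. The scoring function and the variable names (`postal_code`, `mobile_carrier`, `income`) are illustrative assumptions, not output from any real evaluation; production systems approximate this for large feature sets rather than enumerating coalitions.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley attribution: each feature's average marginal
    contribution across all coalitions. Feasible only for small
    feature sets; real tooling uses sampling approximations."""
    n = len(features)
    phi = {f: 0.0 for f in features}
    for f in features:
        others = [g for g in features if g != f]
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                s = frozenset(subset)
                phi[f] += weight * (value_fn(s | {f}) - value_fn(s))
    return phi

# Hypothetical example: a denial-rate disparity (in percentage points)
# driven by three input variables. Numbers are made up for illustration.
def disparity(active):
    pp = 0.0
    if "postal_code" in active:
        pp += 4.0
    if "mobile_carrier" in active:
        pp += 1.5
    if "income" in active:
        pp += 0.5
    # Interaction: postal code and carrier jointly proxy the same signal.
    if {"postal_code", "mobile_carrier"} <= active:
        pp += 1.0
    return pp

phi = shapley_values(["postal_code", "mobile_carrier", "income"], disparity)
for name, v in sorted(phi.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {v:+.2f} pp of disparity")
```

The efficiency property guarantees the per-variable attributions sum exactly to the total disparity, which is what lets a finding say "variable X contributes N% of the gap."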
Once your system passes evaluation, AI Asset Assurance seals the deployment with behavioral fingerprinting. Any post-certification change — model update, prompt modification, RAG index swap — is detected automatically. Your certification is valid for as long as the system remains unchanged.
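The sealing idea reduces to a simple invariant: hash every component of the certified deployment into one digest, then recompute and compare before serving requests. The sketch below assumes a config dict of component identifiers (the keys and placeholder digests are hypothetical); the actual product also fingerprints behavior, not just configuration.

```python
import hashlib
import json

def deployment_fingerprint(config: dict) -> str:
    """Hash the whole deployment stack into one digest. Canonical JSON
    serialization (sorted keys, fixed separators) makes it deterministic."""
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical certified configuration; component digests are placeholders.
certified_config = {
    "model": "sha256:aaaa",
    "system_prompt": "sha256:bbbb",
    "rag_index": "sha256:cccc",
    "safety_filter": "v2.1.0",
    "tools": ["search", "calculator"],
}
certified = deployment_fingerprint(certified_config)

def verify_before_request(current_config: dict) -> bool:
    """Per-request check: any change to any component changes the digest."""
    return deployment_fingerprint(current_config) == certified
```

A model swap, prompt edit, or RAG index update changes one field, which changes the digest, which fails verification before the next request is processed.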
Credo AI, Holistic AI, and Regulativ are all actively marketing EU AI Act solutions. We respect what they do — and we're precise about what only AI Asset Assurance provides.
| Capability | Credo AI | Holistic AI | AIUC | Regulativ | Big 4 audit | AI Asset Assurance |
|---|---|---|---|---|---|---|
| Policy packs / templates | ✓ Pre-built | ✓ Pre-built | ✓ AIUC-1 standard | ✓ Automated | ✓ Custom | ✓ Article-by-article |
| Continuous monitoring | ✓ Dashboard | ✓ Guardian Agents | Quarterly re-audit | ✓ | Point-in-time | ✓ Behavioral verification |
| Compliance documentation | ✓ Audit-ready | ✓ Audit-ready | ✓ Certification report | ✓ Auto-generated | ✓ | ✓ Examination-ready |
| Risk classification | ✓ | ✓ | ✓ 6 risk domains | ✓ 8-12 weeks | ✓ | ✓ |
| Enterprise maturity | ✓ $39M raised, Gartner | ✓ EU regulator advisory | ✓ $15M seed, UiPath certified | ✓ | ✓ Global presence | New entrant |
| Streamlined (no auditor required) | Enterprise sales | Enterprise sales | Schellman audit required | Enterprise sales | Engagement-based | ✓ Upload I/O data directly |
| Causal attribution (Shapley) | Aggregate scores | Aggregate scores | Standard checklist | — | — | ✓ Per-variable decomposition |
| Structural independence | Single platform | Single platform | ✓ Independent + audit | Single platform | Single firm | ✓ 5 validators, BFT consensus |
| Deployment integrity (Art. 15) | Output monitoring | Output monitoring | Point-in-time | — | — | ✓ 6-layer sealed deployment |
| Customer-authored evaluation criteria | Vendor policies | Vendor policies | Fixed AIUC-1 standard | Fixed rules | Firm methodology | ✓ Living Constitution |
| Irreversibility + harm-of-inaction | — | — | Implicit in risk domains | — | — | ✓ Dedicated metrics |
| Prescriptive remediation | Policy suggestions | Policy suggestions | Findings report | — | Narrative recs | ✓ Per-variable fix + projected improvement |
| Per-request verification | — | — | — | — | — | ✓ Real-Time tier |
Credo AI ($39M raised, Gartner and Forrester recognized, Booz Allen federal partnership) is the most established AI governance platform globally. They cover model-level, agent-level, and application-level governance with pre-built policy packs for EU AI Act, NIST AI RMF, ISO 42001, and more. Their continuous monitoring and compliance documentation are strong. If your primary need is enterprise-wide AI governance workflow and policy management, Credo AI is a leading option.
Holistic AI (London-based, EU regulator advisory role) has the strongest European regulatory relationships of any AI governance company. Their Guardian Agents provide dual-mode monitoring and intervention. They already perform ~20% of NYC LL144 audits, giving them cross-regulation credibility. For companies needing a European-headquartered governance partner, Holistic AI has clear advantages.
Where AI Asset Assurance is different, stated precisely: three capabilities that no competitor provides, each mapped to a specific EU AI Act article:
1. Article 15 ("functions consistently"): Credo AI and Holistic AI monitor AI outputs — they detect when behavior drifts. AI Asset Assurance seals the entire deployment stack (model + prompt + RAG + tools + safety + infrastructure) and verifies it hasn't changed on every request. Output monitoring detects inconsistency after the fact. Deployment verification prevents it. That's the difference between "we noticed the system changed" and "the system cannot change without invalidating its certification."
2. Article 13 (transparency): Every platform generates compliance documentation. AI Asset Assurance generates per-decision causal explanations — Shapley attribution showing which variables drove each specific output and by how much. This is what Article 13's "sufficiently transparent to enable deployers to interpret a system's output" actually requires for individual decisions.
3. Article 9 (risk management): Every platform identifies risks. AI Asset Assurance identifies the specific variable causing each risk, prescribes the exact fix, and projects the improvement. "Your fairness score is 0.72 because variable X contributes 34% of the disparity via proxy correlation — removing it improves the score to 0.89." That's actionable risk management, not a risk score.
The sharpest framing: Credo AI and Holistic AI are governance platforms — they manage your AI risk portfolio. AI Asset Assurance is the independent evaluation that tells you whether your governance is actually working, which specific variables are causing problems, and exactly how to fix them. Most enterprises deploying high-risk AI in the EU will need both a governance platform and an independent evaluator.
The EU AI Act doesn't just require evaluation at certification — it requires ongoing consistency. Real-Time verification is the technical implementation of that requirement.
Full conformity support every 90 days. Article-by-article mapping. Shapley attribution for every finding. Supports your conformity assessment documentation. Between evaluations, you rely on internal post-market monitoring.
Every request to your AI system is verified against the certified deployment configuration. Model swap, prompt change, RAG index update, safety filter modification — any change is detected before the next request processes. Article 15's "functions consistently" requirement isn't a quarterly check. It's a continuous obligation. Real-Time makes it a continuous verification.
The Article 15 distinction: Credo AI and Holistic AI monitor outputs — they detect when your AI's behavior drifts. AI Asset Assurance Real-Time verifies the deployment — it detects when anything in the AI system changes, before a single output is produced through the modified system. Output monitoring answers "is the AI behaving well?" Deployment verification answers "is this the same AI that was certified?" The EU AI Act requires both. Only AI Asset Assurance provides both.
Each scenario reflects documented patterns from existing enforcement, litigation, and regulatory findings — applied to Annex III high-risk AI systems that will require conformity assessment by August 2, 2026.
EU AI Act penalty structure: Up to €35 million or 7% of global annual turnover, whichever is higher, for prohibited practices. Up to €15 million or 3% of turnover, whichever is higher, for violations of provider obligations under Articles 8–15. Up to €7.5 million or 1% of turnover, whichever is higher, for supplying incorrect information. National market surveillance authorities enforce, with cross-border coordination through the EU AI Office. For a company with €1B annual revenue, a 3% penalty is €30 million.
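The "whichever is higher" structure is easy to state as arithmetic. This sketch encodes only the tiers summarized above; it is an illustration of the fine structure, not legal advice.

```python
def max_penalty(turnover_eur: float, cap_eur: float, pct: float) -> float:
    """EU AI Act fine ceiling: the higher of a fixed cap or a share of
    global annual turnover, per the tier structure summarized above."""
    return max(cap_eur, turnover_eur * pct)

# Provider-obligation tier (Articles 8-15): EUR 15M or 3% of turnover.
print(max_penalty(1_000_000_000, 15_000_000, 0.03))  # EUR 1B turnover: 3% dominates
print(max_penalty(200_000_000, 15_000_000, 0.03))    # EUR 200M turnover: fixed cap dominates
```

Note the crossover: below €500M turnover the €15M fixed cap is the binding ceiling, above it the 3% share is.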
A multinational uses an AI system for candidate screening across 12 EU countries. The system passes its initial conformity assessment. Three months later, the vendor updates the model to improve "time-to-fill" metrics. The update changes how the system weights employment gaps — penalizing career breaks more heavily. Women returning from parental leave and older workers re-entering the workforce are disproportionately affected. Article 15 requires the system to "function consistently" post-deployment — the vendor update broke that consistency.
The provider has no post-market monitoring system that detects the behavioral change. The modified system operates for months across 12 countries. A national market surveillance authority investigates after candidate complaints. The conformity assessment is invalidated. Article 16(f) requires the provider to have taken corrective action — they didn't because they didn't know. For a company with €500M revenue: up to €15M penalty.
Real-Time deployment integrity verification detects the model change immediately — the behavioral fingerprint shifts when the vendor pushes the update. The provider is alerted before the modified system processes a single candidate. Shapley attribution identifies the "employment gap weighting" change as the specific variable. The provider pauses, re-evaluates, and addresses the disparity before resuming operations.
Comparable: Mobley v. Workday (US), a nationwide age-discrimination collective action preliminarily certified in May 2025, alleging that AI screening tools discriminate against applicants. The EU AI Act creates a regulatory pathway for the same claims across 27 member states simultaneously.
A European fintech uses AI for consumer credit decisions across Germany, France, and the Netherlands. The model doesn't use ethnicity. But it uses postal code, language preference, and mobile carrier — variables that in European cities correlate with immigration status and ethnic background. Turkish-origin residents in Berlin, North African-origin residents in Marseille, and Moroccan-origin residents in Amsterdam are denied at 1.9x the rate of demographically similar applicants in other neighborhoods. Article 10 requires training data to be "relevant, sufficiently representative, and free of errors." The proxy variables make the data systematically unrepresentative.
The fintech's Article 9 risk management system tests for direct discrimination but not proxy discrimination. Three national market surveillance authorities open parallel investigations. Article 10 violation for unrepresentative data. Article 13 violation for insufficient transparency about how decisions are made. Cross-border coordination through the EU AI Office amplifies the enforcement. For a fintech with €200M revenue: a penalty of up to €15M per investigating member state, since the €15M fixed cap exceeds 3% of turnover at this revenue.
Cohort analysis reveals the 1.9x denial disparity across demographic groups in each market. Shapley attribution identifies postal code, language preference, and mobile carrier as contributing 52% of the disparity — proxy variables for ethnicity. The fintech removes or reweights these variables. The evaluation report documents the finding and remediation, mapping directly to Articles 9, 10, and 13 requirements.
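A cohort analysis of this kind boils down to comparing outcome rates across groups. The sketch below computes denial-rate ratios relative to the most favoured cohort; the cohort names and counts are invented to show how a 1.9x disparity would surface, and real analyses would add statistical significance testing.

```python
def denial_rate_ratio(outcomes_by_group: dict) -> dict:
    """Cohort analysis: each group's denial rate relative to the most
    favoured group. outcomes_by_group maps group name to a list of
    booleans (True = application denied)."""
    rates = {g: sum(o) / len(o) for g, o in outcomes_by_group.items()}
    floor = min(rates.values())
    return {g: r / floor for g, r in rates.items()}

# Illustrative cohorts, not real data: 19% vs 10% denial rates
# yield a ratio near 1.9 for neighbourhood_A.
cohorts = {
    "neighbourhood_A": [True] * 19 + [False] * 81,
    "neighbourhood_B": [True] * 10 + [False] * 90,
}
print(denial_rate_ratio(cohorts))
```

Cohort analysis of this form surfaces the disparity; Shapley attribution then decomposes it into per-variable contributions, which is what turns a finding into a fix.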
Comparable: Apple Card / Goldman Sachs — investigated for credit limit disparities driven by proxy variables. Under the EU AI Act, the same pattern triggers regulatory enforcement across all 27 member states.
A hospital network across three EU member states deploys AI for emergency department triage prioritization. The system predicts patient acuity based on vital signs, symptoms, and historical health records. But the historical records reflect existing healthcare access disparities — patients from lower socioeconomic backgrounds have fewer documented visits, thinner medical histories, and less detailed records. The AI interprets sparse records as lower complexity and assigns lower triage priority. Patients who need urgent care most are deprioritized because their historical data is least complete.
A patient is deprioritized and suffers a preventable adverse outcome. Investigation reveals the systematic bias in triage scoring. Article 9 violation for inadequate risk management. Article 14 violation for insufficient human oversight — the system overrode clinical judgment without adequate safeguards. The hospital faces regulatory penalties plus medical malpractice liability. Reputational damage across the network.
Cohort analysis reveals the triage scoring disparity across socioeconomic groups. Shapley attribution identifies "medical record completeness" as contributing 41% of the scoring gap — a proxy for healthcare access, not clinical acuity. The hospital adjusts the model to weight presenting symptoms and vital signs more heavily than historical record depth. The evaluation documents the finding and remediation for the conformity assessment.
Comparable: Obermeyer et al. (Science, 2019) — widely-used US healthcare algorithm systematically disadvantaged Black patients by using healthcare spending as a proxy for health need.
An insurer uses AI for motor insurance pricing across the EU. The model produces prices but cannot explain why two similar drivers receive different premiums. Article 13 requires that high-risk AI systems be "sufficiently transparent to enable deployers to interpret a system's output and use it appropriately." When a customer disputes their premium, the insurer can only say "the algorithm determined it." National consumer protection authorities in three member states receive complaints about unexplainable pricing disparities that correlate with neighborhood demographics.
The insurer cannot satisfy Article 13 because the model is a black box — it produces outputs without interpretable reasoning. Market surveillance authorities find the insurer failed to provide "instructions for use" that enable deployers to interpret outputs. Simultaneous violations across multiple member states. The insurer must withdraw the system from the EU market until transparency requirements are met. Revenue loss during withdrawal plus regulatory penalties.
Shapley attribution generates per-decision explanations showing which variables drove each premium calculation and by how much. The insurer can tell every customer exactly why their premium is what it is. Cohort analysis identifies neighborhood-correlated pricing disparities. The evaluation report provides Article 13-compliant transparency documentation and Article 9-compliant risk management evidence for the conformity assessment.
Article 13 transparency is the most commonly underestimated EU AI Act requirement. Most AI models cannot explain their individual decisions — which means most will fail conformity assessment without external attribution tools.
AI Asset Assurance evaluation costs a fraction of a single violation. It produces the conformity assessment evidence Articles 9–15 require, with per-decision Shapley causal attribution and prescriptive remediation — capabilities protected by patent-pending filings.
Don't wait for the deadline. Start your conformity assessment now and have your certification in hand before August 2, 2026.
Request evaluation