DAYS UNTIL EU AI ACT ENFORCEMENT — August 2, 2026. Also covering: NYC Local Law 144 (live now) · Colorado AI Act (June 30, 2026) · SR 11-7 model validation · GDPR Article 22.
Enforcement: August 2, 2026

EU AI Act enforcement begins August 2, 2026. Is your AI system ready?

Annex III high-risk AI systems in employment, lending, insurance, and education require conformity assessment with documented risk management, bias testing, and human oversight. Most can self-assess — but without independent evaluation, that self-assessment has no credibility. AI Asset Assurance maps directly to Articles 9 through 15.

Request evaluation
REGULATION STATUS
Enforcement date: August 2, 2026
Scope: Annex III high-risk systems
Assessment type: Self-assessment (most)
AI Asset Assurance coverage: Articles 9–15 mapped
Days remaining: ~133
What the regulation requires

Requirements and how AI Asset Assurance addresses each one

Every requirement maps directly to an AI Asset Assurance capability. One evaluation pipeline covers everything you need.

Risk management system

Article 9

Establish and maintain a risk management system throughout the AI system's lifecycle. Identify and analyze known and foreseeable risks. Adopt suitable risk management measures.

AI Asset Assurance: Seven-stage evaluation pipeline identifies risks across five independent dimensions. Per-request verification (Real-Time tier) maintains risk assessment throughout the lifecycle. Shapley attribution identifies root causes, not just symptoms.

Data governance

Article 10

Training, validation, and testing data must be relevant, representative, free of errors, and complete. Data governance and management practices must be appropriate.

AI Asset Assurance: Behavioral fingerprinting detects training data issues through output analysis — without requiring access to training data itself. Cohort detection identifies underrepresented groups. Delta analysis pinpoints data-driven bias.

Technical documentation

Article 11

Technical documentation must be drawn up before the system is placed on the market and kept up to date. Must demonstrate compliance with all Chapter 2 requirements.

AI Asset Assurance: Every evaluation produces a digitally signed certification report with decomposed findings mapped to each Article. The report serves as the technical documentation required by Article 11.

Record-keeping

Article 12

High-risk AI systems must allow for automatic recording of events (logs) throughout the system's lifetime, to ensure traceability of the system's functioning.

AI Asset Assurance: Sealed deployment verification creates a tamper-proof record of the system's state at evaluation time. Canary probes generate ongoing behavioral logs. All records are cryptographically signed and timestamped.
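The sealing format itself is not public; as a conceptual sketch (component names and values here are hypothetical), a tamper-evident record of a deployment's state can be built as a canonical hash over every component of the stack:

```python
import hashlib
import json

def deployment_fingerprint(components: dict) -> str:
    """Tamper-evident digest over every component of the deployment
    stack. Canonical JSON (sorted keys) makes the hash deterministic."""
    canonical = json.dumps(components, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

stack = {
    "model": "screening-model-v1.4",        # hypothetical identifiers
    "system_prompt": "rank by skills match",
    "rag_index_version": "2026-01-15",
    "safety_config": {"filter_level": "strict"},
}
sealed = deployment_fingerprint(stack)      # recorded at evaluation time

# A vendor update to any single component changes the digest.
updated = dict(stack, model="screening-model-v1.5")
assert deployment_fingerprint(updated) != sealed
```

Signing and timestamping the digest then yields a record that can be checked at any later point without re-running the evaluation.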

Transparency and information

Article 13

Systems must be designed to be sufficiently transparent to enable deployers to interpret the system's output and use it appropriately.

AI Asset Assurance: Shapley attribution generates individual-level causal explanations — not aggregate scores. For each decision, the report shows which input variables contributed what percentage of the outcome, making the system's reasoning interpretable.
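Shapley attribution is standard cooperative game theory; a minimal exact (brute-force) sketch for a toy scoring model shows the idea. The model, features, and baseline below are invented for illustration, not taken from the product:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley attribution for one decision: each feature's share
    of the gap between the model's output on x and on a baseline input.
    Features outside a coalition are set to their baseline value."""
    n = len(x)
    def value(coalition):
        masked = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return predict(masked)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy credit score: weighted sum of (income, debt_ratio, postal_risk).
predict = lambda f: 0.5 * f[0] - 0.3 * f[1] + 0.2 * f[2]
phi = shapley_values(predict, x=[0.9, 0.4, 0.8], baseline=[0.5, 0.5, 0.5])
# phi sums exactly to predict(x) - predict(baseline), so each phi[i]
# can be reported as a percentage contribution to this decision.
```

Brute force is exponential in the number of features; production systems use sampling or model-specific approximations, but the per-decision, per-variable decomposition is the same object.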

Human oversight

Article 14

Systems must be designed to be effectively overseen by natural persons during the period in which the system is in use. Oversight measures must be commensurate with the risks.

AI Asset Assurance: Deployment integrity verification detects whether the system humans approved has changed since sign-off. If the model, prompt, or configuration changes after human sign-off, AI Asset Assurance detects the drift and alerts oversight personnel.

Accuracy, robustness, cybersecurity

Article 15

Systems must achieve appropriate levels of accuracy, robustness, and cybersecurity and perform consistently throughout their lifecycle.

AI Asset Assurance: Five structurally independent validators (different models, organizations, infrastructure) provide Byzantine fault tolerant consensus. Continuous monitoring detects accuracy degradation. Sealed deployments protect against tampering.
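The exact consensus protocol is a product design claim; as a minimal sketch of how a Byzantine-tolerant quorum over five independent verdicts could work (thresholds assumed, not taken from the source):

```python
from collections import Counter

def consensus_verdict(verdicts, f=1):
    """Quorum vote over independent validator verdicts. With n = 5
    validators and at most f = 1 faulty (n >= 3f + 1), any verdict
    backed by >= 2f + 1 = 3 votes includes at least f + 1 honest
    validators, so a single faulty validator cannot force an outcome."""
    n = len(verdicts)
    assert n >= 3 * f + 1
    top, count = Counter(verdicts).most_common(1)[0]
    return top if count >= 2 * f + 1 else None  # None = no quorum

assert consensus_verdict(["pass", "pass", "pass", "pass", "fail"]) == "pass"
assert consensus_verdict(["pass", "pass", "fail", "fail", "error"]) is None
```

The structural point is that the validators run on separate infrastructure under separate organizations, so the fault assumption covers compromise as well as error.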

Conformity assessment

Article 43

Providers of Annex III high-risk AI systems must carry out conformity assessment before placing the system on the market or putting it into service.

AI Asset Assurance: One evaluation produces the documentation your conformity assessment needs: a certification report designed to serve as independent technical evidence, with findings mapped article by article.

Penalties: Up to €35 million or 7% of global annual turnover

For violations of prohibited AI practices. Up to €15M or 3% for other non-compliance. These are the largest AI-specific penalties in any jurisdiction worldwide.

Who is affected

Is your organization in scope?

These are the organizations most likely to need compliance evaluation under this regulation.

HR technology vendors

Any AI used for recruitment, screening, hiring, promotion, or termination decisions affecting EU residents. Includes ATS systems, resume screeners, and interview analysis tools.

HIGH RISK — ANNEX III

Financial services

AI used for creditworthiness assessment, credit scoring, risk assessment for life and health insurance pricing. Includes automated lending decisions and insurance underwriting.

HIGH RISK — ANNEX III

Insurance providers

AI systems used for risk assessment and pricing in life and health insurance. Algorithmic underwriting and claims processing that affects policy terms or coverage.

HIGH RISK — ANNEX III

Education technology

AI used for determining access to education, evaluating learning outcomes, or monitoring student behavior during testing. Includes adaptive learning platforms and automated grading.

HIGH RISK — ANNEX III

Public administration

AI used for evaluating eligibility for public benefits, services, or essential services. Includes welfare determination, migration, and border control systems.

HIGH RISK — ANNEX III

US companies with EU customers

The EU AI Act has extraterritorial reach. If your AI system's output is used within the EU — even if your company is US-based — you are in scope. This includes SaaS platforms serving EU enterprises.

EXTRATERRITORIAL REACH
How AI Asset Assurance works

From evaluation to certification in three steps

AI Asset Assurance doesn't just find problems — it identifies root causes with Shapley attribution and prescribes exactly how to fix them.

STEP 01

Evaluate

Five structurally independent validators evaluate your AI system across bias, accuracy, robustness, transparency, and safety dimensions. Each validator runs on separate infrastructure from separate organizations — no single point of failure or compromise.

STEP 02

Decompose & prescribe

Shapley attribution identifies the root cause of any issue found — which specific input variables, training data patterns, or system configurations drive each problem. You don't just learn what's wrong; you learn exactly how to fix it, mapped to specific EU AI Act articles.

STEP 03

Certify & verify

Once your system passes evaluation, AI Asset Assurance seals the deployment with behavioral fingerprinting. Any post-certification change — model update, prompt modification, RAG index swap — is detected automatically. Your certification is valid for as long as the system remains unchanged.

AI Asset Assurance vs alternatives

Multiple platforms target EU AI Act compliance. Here's the honest comparison.

Credo AI, Holistic AI, and Regulativ are all actively marketing EU AI Act solutions. We respect what they do — and we're precise about what only AI Asset Assurance provides.

Columns: Credo AI · Holistic AI · AIUC · Regulativ · Big 4 audit · AI Asset Assurance

Policy packs / templates: ✓ Pre-built · ✓ Pre-built · ✓ AIUC-1 standard · ✓ Automated · ✓ Custom · ✓ Article-by-article
Continuous monitoring: ✓ Dashboard · ✓ Guardian Agents · Quarterly re-audit · Point-in-time · ✓ Behavioral verification
Compliance documentation: ✓ Audit-ready · ✓ Audit-ready · ✓ Certification report · ✓ Auto-generated · ✓ Examination-ready
Risk classification: ✓ 6 risk domains · ✓ 8-12 weeks
Enterprise maturity: ✓ $39M raised, Gartner · ✓ EU regulator advisory · ✓ $15M seed, UiPath certified · ✓ Global presence · New entrant
Streamlined (no auditor required): Enterprise sales · Enterprise sales · Schellman audit required · Enterprise sales · Engagement-based · ✓ Upload I/O data directly
Causal attribution (Shapley): Aggregate scores · Aggregate scores · Standard checklist · ✓ Per-variable decomposition
Structural independence: Single platform · Single platform · ✓ Independent + audit · Single platform · Single firm · ✓ 5 validators, BFT consensus
Deployment integrity (Art. 15): Output monitoring · Output monitoring · Point-in-time · ✓ 6-layer sealed deployment
Customer-authored evaluation criteria: Vendor policies · Vendor policies · Fixed AIUC-1 standard · Fixed rules · Firm methodology · ✓ Living Constitution
Irreversibility + harm-of-inaction: Implicit in risk domains · ✓ Dedicated metrics
Prescriptive remediation: Policy suggestions · Policy suggestions · Findings report · Narrative recs · ✓ Per-variable fix + projected improvement
Per-request verification: ✓ Real-Time tier
Being honest about the competition

Credo AI ($39M raised, Gartner and Forrester recognized, Booz Allen federal partnership) is the most established AI governance platform globally. They cover model-level, agent-level, and application-level governance with pre-built policy packs for EU AI Act, NIST AI RMF, ISO 42001, and more. Their continuous monitoring and compliance documentation are strong. If your primary need is enterprise-wide AI governance workflow and policy management, Credo AI is a leading option.

Holistic AI (London-based, EU regulator advisory role) has the strongest European regulatory relationships of any AI governance company. Their Guardian Agents provide dual-mode monitoring and intervention. They already perform ~20% of NYC LL144 audits, giving them cross-regulation credibility. For companies needing a European-headquartered governance partner, Holistic AI has clear advantages.

Where AI Asset Assurance is different — stated precisely: Three capabilities that no competitor provides, each mapped to a specific EU AI Act article:

1. Article 15 ("functions consistently"): Credo AI and Holistic AI monitor AI outputs — they detect when behavior drifts. AI Asset Assurance seals the entire deployment stack (model + prompt + RAG + tools + safety + infrastructure) and verifies it hasn't changed on every request. Output monitoring detects inconsistency after the fact. Deployment verification prevents it. That's the difference between "we noticed the system changed" and "the system cannot change without invalidating its certification."

2. Article 13 (transparency): Every platform generates compliance documentation. AI Asset Assurance generates per-decision causal explanations — Shapley attribution showing which variables drove each specific output and by how much. This is what Article 13's "sufficiently transparent to enable deployers to interpret a system's output" actually requires for individual decisions.

3. Article 9 (risk management): Every platform identifies risks. AI Asset Assurance identifies the specific variable causing each risk, prescribes the exact fix, and projects the improvement. "Your fairness score is 0.72 because variable X contributes 34% of the disparity via proxy correlation — removing it improves the score to 0.89." That's actionable risk management, not a risk score.

The sharpest framing: Credo AI and Holistic AI are governance platforms — they manage your AI risk portfolio. AI Asset Assurance is the independent evaluation that tells you whether your governance is actually working, which specific variables are causing problems, and exactly how to fix them. Most enterprises deploying high-risk AI in the EU will need both a governance platform and an independent evaluator.

Real-Time tier

Article 15 requires systems to "function consistently." Real-Time proves it.

The EU AI Act doesn't just require evaluation at certification — it requires ongoing consistency. Real-Time verification is the technical implementation of that requirement.

Standard evaluation
Quarterly re-evaluation

Full conformity support every 90 days. Article-by-article mapping. Shapley attribution for every finding. Supports your conformity assessment documentation. Between evaluations, you rely on internal post-market monitoring.

Real-Time tier
Per-request verification

Every request to your AI system is verified against the certified deployment configuration. Model swap, prompt change, RAG index update, safety filter modification — any change is detected before the next request processes. Article 15's "functions consistently" requirement isn't a quarterly check. It's a continuous obligation. Real-Time makes it a continuous verification.
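As a sketch of that per-request gate (the digest scheme and component names here are hypothetical, not the product's actual wire format), the check is a comparison between the live deployment's digest and the one pinned at certification:

```python
import hashlib
import json

def digest(deployment: dict) -> str:
    """Canonical digest of the deployment stack as observed right now."""
    return hashlib.sha256(
        json.dumps(deployment, sort_keys=True).encode("utf-8")).hexdigest()

certified = {"model": "m-1.0", "prompt": "p-7", "rag_index": "idx-3"}
CERTIFIED_DIGEST = digest(certified)  # pinned at certification time

def verify_then_serve(request, deployment, handler):
    """Gate every request on deployment integrity: any mismatch with
    the certified configuration blocks the request before the modified
    system produces a single output."""
    if digest(deployment) != CERTIFIED_DIGEST:
        raise RuntimeError("deployment drift: certification invalidated")
    return handler(request)

assert verify_then_serve("req-1", certified, lambda r: "served") == "served"

# A silent model swap is rejected before any output is produced.
swapped = dict(certified, model="m-1.1")
try:
    verify_then_serve("req-2", swapped, lambda r: "served")
except RuntimeError as err:
    assert "drift" in str(err)
```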

The Article 15 distinction: Credo AI and Holistic AI monitor outputs — they detect when your AI's behavior drifts. AI Asset Assurance Real-Time verifies the deployment — it detects when anything in the AI system changes, before a single output is produced through the modified system. Output monitoring answers "is the AI behaving well?" Deployment verification answers "is this the same AI that was certified?" The EU AI Act requires both. Only AI Asset Assurance provides both.

Real exposure

What companies face without independent AI evaluation under the EU AI Act

Each scenario reflects documented patterns from existing enforcement, litigation, and regulatory findings — applied to Annex III high-risk AI systems that will require conformity assessment by August 2, 2026.

EU AI Act penalty structure: Up to €35 million or 7% of global annual turnover (whichever is higher) for prohibited practices. Up to €15 million or 3% of turnover for violations of provider obligations under Articles 8–15. Up to €7.5 million or 1% of turnover for supplying incorrect information. National market surveillance authorities enforce, with cross-border coordination through the EU AI Office. For a company with €1B annual revenue, a 3% penalty is €30 million.
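The turnover arithmetic above can be checked directly. The sketch below assumes the "whichever is higher" rule quoted for the top tier also applies to the provider-obligation tier, and is an illustration, not legal advice:

```python
def provider_penalty_ceiling(turnover_eur: int) -> int:
    """Ceiling for violations of provider obligations (Articles 8-15):
    the higher of EUR 15M or 3% of global annual turnover, using the
    figures quoted in the text. Integer euros keep the math exact."""
    return max(15_000_000, turnover_eur * 3 // 100)

assert provider_penalty_ceiling(1_000_000_000) == 30_000_000  # EUR 1B -> EUR 30M
```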

AI recruitment tool changes after conformity assessment — nobody notices
Employment Up to 3% of turnover

A multinational uses an AI system for candidate screening across 12 EU countries. The system passes its initial conformity assessment. Three months later, the vendor updates the model to improve "time-to-fill" metrics. The update changes how the system weights employment gaps — penalizing career breaks more heavily. Women returning from parental leave and older workers re-entering the workforce are disproportionately affected. Article 15 requires the system to "function consistently" post-deployment — the vendor update broke that consistency.

WITHOUT EVALUATION

The provider has no post-market monitoring system that detects the behavioral change. The modified system operates for months across 12 countries. A national market surveillance authority investigates after candidate complaints. The conformity assessment is invalidated. Article 16(f) requires the provider to have taken corrective action — they didn't because they didn't know. For a company with €500M revenue: up to €15M penalty.

WITH AI Asset Assurance EVALUATION

Real-Time deployment integrity verification detects the model change immediately — the behavioral fingerprint shifts when the vendor pushes the update. The provider is alerted before the modified system processes a single candidate. Shapley attribution identifies the "employment gap weighting" change as the specific variable. The provider pauses, re-evaluates, and addresses the disparity before resuming operations.

Comparable: Mobley v. Workday (US) — class action certified May 2025 alleging AI screening discrimination. The EU AI Act creates a regulatory pathway for the same claims across 27 member states simultaneously.

AI credit scoring denies loans using proxy variables for ethnicity
Financial services Up to 3% of turnover

A European fintech uses AI for consumer credit decisions across Germany, France, and the Netherlands. The model doesn't use ethnicity. But it uses postal code, language preference, and mobile carrier — variables that in European cities correlate with immigration status and ethnic background. Turkish-origin residents in Berlin, North African-origin residents in Marseille, and Moroccan-origin residents in Amsterdam are denied at 1.9x the rate of demographically similar applicants in other neighborhoods. Article 10 requires training data to be "relevant, sufficiently representative, and free of errors." The proxy variables make the data systematically unrepresentative.

WITHOUT EVALUATION

The fintech's Article 9 risk management system tests for direct discrimination but not proxy discrimination. Three national market surveillance authorities open parallel investigations. Article 10 violation for unrepresentative data. Article 13 violation for insufficient transparency about how decisions are made. Cross-border coordination through the EU AI Office amplifies the enforcement. For a fintech with €200M revenue: up to €6M per member state.

WITH AI Asset Assurance EVALUATION

Cohort analysis reveals the 1.9x denial disparity across demographic groups in each market. Shapley attribution identifies postal code, language preference, and mobile carrier as contributing 52% of the disparity — proxy variables for ethnicity. The fintech removes or reweights these variables. The evaluation report documents the finding and remediation, mapping directly to Articles 9, 10, and 13 requirements.

Comparable: Apple Card / Goldman Sachs — investigated for credit limit disparities driven by proxy variables. Under the EU AI Act, the same pattern triggers regulatory enforcement across all 27 member states.
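The 1.9x figure in this scenario is a simple ratio of per-cohort denial rates. A minimal sketch, with synthetic decisions calibrated to reproduce that ratio:

```python
def denial_disparity(decisions, group_key):
    """Per-cohort denial rates and the worst/best ratio.
    A ratio near 1.0 means parity across cohorts."""
    counts = {}
    for d in decisions:
        total, denied = counts.get(d[group_key], (0, 0))
        counts[d[group_key]] = (total + 1, denied + (d["outcome"] == "deny"))
    rates = {g: denied / total for g, (total, denied) in counts.items()}
    return max(rates.values()) / min(rates.values()), rates

# Synthetic data: 19% denial rate in district A vs 10% in district B.
decisions = (
    [{"district": "A", "outcome": "deny"}] * 19
    + [{"district": "A", "outcome": "approve"}] * 81
    + [{"district": "B", "outcome": "deny"}] * 10
    + [{"district": "B", "outcome": "approve"}] * 90
)
ratio, rates = denial_disparity(decisions, "district")
assert abs(ratio - 1.9) < 1e-9
```

Cohort detection in practice groups applicants on many dimensions at once; the ratio above is the single-dimension core of the check.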

AI triage system in hospital deprioritizes patients by socioeconomic signals
Healthcare Up to 3% + patient safety

A hospital network across three EU member states deploys AI for emergency department triage prioritization. The system predicts patient acuity based on vital signs, symptoms, and historical health records. But the historical records reflect existing healthcare access disparities — patients from lower socioeconomic backgrounds have fewer documented visits, thinner medical histories, and less detailed records. The AI interprets sparse records as lower complexity and assigns lower triage priority. Patients who need urgent care most are deprioritized because their historical data is least complete.

WITHOUT EVALUATION

A patient is deprioritized and suffers a preventable adverse outcome. Investigation reveals the systematic bias in triage scoring. Article 9 violation for inadequate risk management. Article 14 violation for insufficient human oversight — the system overrode clinical judgment without adequate safeguards. The hospital faces regulatory penalties plus medical malpractice liability. Reputational damage across the network.

WITH AI Asset Assurance EVALUATION

Cohort analysis reveals the triage scoring disparity across socioeconomic groups. Shapley attribution identifies "medical record completeness" as contributing 41% of the scoring gap — a proxy for healthcare access, not clinical acuity. The hospital adjusts the model to weight presenting symptoms and vital signs more heavily than historical record depth. The evaluation documents the finding and remediation for the conformity assessment.

Comparable: Obermeyer et al. (Science, 2019) — widely used US healthcare algorithm systematically disadvantaged Black patients by using healthcare spending as a proxy for health need.

AI insurance pricing model fails transparency requirements under Article 13
Insurance Up to 3% of turnover

An insurer uses AI for motor insurance pricing across the EU. The model produces prices but cannot explain why two similar drivers receive different premiums. Article 13 requires that high-risk AI systems be "sufficiently transparent to enable deployers to interpret a system's output and use it appropriately." When a customer disputes their premium, the insurer can only say "the algorithm determined it." National consumer protection authorities in three member states receive complaints about unexplainable pricing disparities that correlate with neighborhood demographics.

WITHOUT EVALUATION

The insurer cannot satisfy Article 13 because the model is a black box — it produces outputs without interpretable reasoning. Market surveillance authorities find the insurer failed to provide "instructions for use" that enable deployers to interpret outputs. Simultaneous violations across multiple member states. The insurer must withdraw the system from the EU market until transparency requirements are met. Revenue loss during withdrawal plus regulatory penalties.

WITH AI Asset Assurance EVALUATION

Shapley attribution generates per-decision explanations showing which variables drove each premium calculation and by how much. The insurer can tell every customer exactly why their premium is what it is. Cohort analysis identifies neighborhood-correlated pricing disparities. The evaluation report provides Article 13-compliant transparency documentation and Article 9-compliant risk management evidence for the conformity assessment.

Article 13 transparency is the most commonly underestimated EU AI Act requirement. Most AI models cannot explain their individual decisions — which means most will fail conformity assessment without external attribution tools.

€35M: maximum fine per prohibited practice
7%: of global turnover (whichever is higher)
27: member states with coordinated enforcement

AI Asset Assurance evaluation costs a fraction of a single violation. It produces the conformity assessment evidence Articles 9–15 require, with per-decision Shapley causal attribution and prescriptive remediation — capabilities protected by patent-pending filings.

133 days until EU AI Act enforcement.

Don't wait for the deadline. Start your conformity assessment now and have your certification in hand before August 2, 2026.

Request evaluation