
GDPR gives individuals the right to an explanation. AI Asset Assurance makes that explanation real.

Article 22 of the GDPR restricts automated individual decision-making and gives data subjects the right to obtain meaningful information about the logic involved. Most AI systems cannot provide this. AI Asset Assurance's Shapley attribution generates individual-level causal explanations: "Your application was declined because Variable X contributed 52% and Variable Y contributed 31% to the decision."

Request evaluation
REGULATION STATUS
Status: Enforced — live now
Scope: EU data subjects
Applies to: Solely automated decisions
AI Asset Assurance coverage: Individual explanations
Territorial reach: Extraterritorial
What the regulation requires

Requirements and how AI Asset Assurance addresses each one

Every requirement maps directly to an AI Asset Assurance capability. One evaluation pipeline covers everything you need.

Right not to be subject to automated decisions

Article 22(1)

Data subjects have the right not to be subject to a decision based solely on automated processing which produces legal effects or similarly significantly affects them.

AI Asset Assurance: Evaluation identifies which decisions in your system are "solely automated" and which have meaningful human involvement. The certification documents whether the system qualifies for Article 22 exemptions (consent, contract necessity, or legal authorization).

Meaningful information about the logic

Articles 13(2)(f), 14(2)(g), 15(1)(h)

Controllers must provide data subjects with meaningful information about the logic involved in automated decision-making, as well as the significance and envisaged consequences.

AI Asset Assurance: Shapley attribution generates individual-level causal explanations — not generic "our model considers these factors" statements. For each decision, the explanation shows which specific variables contributed what percentage, making the logic genuinely meaningful.
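For small feature sets, per-decision Shapley contributions of the kind described above can be computed exactly by enumerating feature coalitions. A minimal sketch in plain Python — the scoring model, feature names, and baseline below are illustrative, not AI Asset Assurance's actual implementation:

```python
from itertools import combinations
from math import factorial

def exact_shapley(features, baseline, model):
    """Exact Shapley values by enumerating all feature coalitions.

    `features` holds this applicant's values; `baseline` holds reference
    values (e.g. population means) used for features absent from a
    coalition. Feasible only for small n (2**n coalitions).
    """
    names = list(features)
    n = len(names)

    def value(coalition):
        # Score with coalition features at the applicant's values,
        # all other features held at the baseline.
        x = dict(baseline)
        x.update({k: features[k] for k in coalition})
        return model(x)

    phi = {}
    for i in names:
        others = [k for k in names if k != i]
        total = 0.0
        for r in range(len(others) + 1):
            for s in combinations(others, r):
                # Standard Shapley coalition weight |S|!(n-|S|-1)!/n!
                w = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += w * (value(set(s) | {i}) - value(s))
        phi[i] = total
    return phi

def as_percentages(phi):
    """Normalise absolute attributions to percentage contributions."""
    denom = sum(abs(v) for v in phi.values()) or 1.0
    return {k: round(100 * abs(v) / denom, 1) for k, v in phi.items()}

# Illustrative applicant vs. an all-zero baseline, toy linear scorer:
score = lambda x: 0.5 * x["income"] + 0.3 * x["history"] + 0.2 * x["tenure"]
phi = exact_shapley({"income": 2.0, "history": 1.0, "tenure": 0.0},
                    {"income": 0.0, "history": 0.0, "tenure": 0.0}, score)
print(as_percentages(phi))  # income dominates; shares sum to ~100%
```

Because the weights sum to one per feature, the attributions satisfy the efficiency property: they add up exactly to the gap between the applicant's score and the baseline score, which is what lets each percentage be read as that variable's share of the decision.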

Right to contest the decision

Article 22(3)

Where automated decisions are permitted, the controller must implement suitable measures to safeguard the data subject's rights, including the right to obtain human intervention and to contest the decision.

AI Asset Assurance: Causal decomposition makes contestation meaningful. When a data subject challenges a decision, the human reviewer can see exactly which variables drove the outcome and assess whether those factors were appropriate. Without decomposition, "human review" is rubber-stamping.

Data protection impact assessment

Article 35(3)(a)

A DPIA is required for systematic and extensive evaluation of personal aspects based on automated processing, including profiling, on which decisions are based that produce legal effects.

AI Asset Assurance: The evaluation report provides the technical assessment component of the DPIA — documenting system behavior, bias analysis, and risk assessment. Integrates with your broader DPIA process as the independent technical validation.

Penalties: Up to €20 million or 4% of global annual turnover

Violations of data subject rights under Article 22 fall under the higher penalty tier of the GDPR. DPAs across Europe are increasingly scrutinizing automated decision-making in lending, hiring, and insurance.

Who is affected

Is your organization in scope?

These are the organizations most likely to need compliance evaluation under this regulation.

Banks and lenders

Automated credit decisions, loan approvals, and account closures affecting EU data subjects. Instant credit assessments are a primary enforcement target.

HIGH PRIORITY — DPA SCRUTINY

Insurance companies

Automated underwriting, claims processing, and pricing decisions. Algorithmic insurance pricing in the EU is under increasing regulatory attention.

HIGH PRIORITY — DPA SCRUTINY

Employers and HR platforms

Automated hiring, performance evaluation, and termination decisions. AI-powered ATS systems processing EU candidate data are directly in scope.

HIGH PRIORITY

Digital platforms

Content moderation, account suspension, and advertising targeting decisions that significantly affect EU users. Includes social media platforms and e-commerce marketplaces.

MEDIUM PRIORITY

Healthcare providers

AI-assisted diagnostic, treatment recommendation, and triage systems processing EU patient data. Article 9 special category data protections add additional requirements.

HIGH PRIORITY — SPECIAL CATEGORY

US/UK companies with EU customers

GDPR has extraterritorial reach. If you process EU residents' personal data for automated decisions — regardless of where your servers are — Article 22 applies.

EXTRATERRITORIAL REACH
How AI Asset Assurance works

From evaluation to certification in three steps

AI Asset Assurance doesn't just find problems — it identifies root causes with Shapley attribution and prescribes exactly how to fix them.

STEP 01

Map automated decisions

AI Asset Assurance identifies which decisions in your system are "solely automated" under Article 22, which have meaningful human involvement, and which qualify for exemptions. This mapping is the foundation of your compliance posture.
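The mapping step can be pictured as a triage over an inventory of decision points. A rough sketch, with an illustrative schema (the field names and classification labels are assumptions, not AI Asset Assurance's schema); the key rule it encodes is that designing the rules does not count as human involvement — review must happen per decision, with real authority to change the outcome:

```python
from dataclasses import dataclass

@dataclass
class DecisionPoint:
    """One automated decision in the system (illustrative schema)."""
    name: str
    human_can_override: bool       # a human can change the outcome
    human_reviews_each_case: bool  # review happens per decision, not per design
    legal_or_similar_effect: bool  # produces legal or similarly significant effects

def article22_class(d: DecisionPoint) -> str:
    """Rough triage of a decision point against Article 22(1)."""
    if not d.legal_or_similar_effect:
        return "out of scope"
    if d.human_reviews_each_case and d.human_can_override:
        return "human-in-the-loop"
    return "solely automated — Article 22 applies"

# An auto-processed claims queue is solely automated even though
# humans designed its rules:
auto_claims = DecisionPoint("small claims auto-processing",
                            human_can_override=False,
                            human_reviews_each_case=False,
                            legal_or_similar_effect=True)
print(article22_class(auto_claims))
```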

STEP 02

Generate causal explanations

Shapley attribution produces individual-level explanations for each automated decision. Not "the model considered your income and credit history" but "your income contributed 38% to the decision, your credit history contributed 27%, and your employment tenure contributed 19%." Genuinely meaningful.
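Turning percentage attributions into the plain-language sentence above is a rendering step. A minimal sketch, assuming contributions have already been computed and mapped to human-readable factor names (the wording template is illustrative):

```python
def explain(contributions, outcome):
    """Render percentage contributions as a plain-language explanation.

    `contributions` maps a readable factor name to its percentage share
    of the decision, e.g. {"income": 38, "credit history": 27, ...}.
    Factors are listed largest contribution first.
    """
    ranked = sorted(contributions.items(), key=lambda kv: -kv[1])
    parts = [f"your {name} contributed {pct}%" for name, pct in ranked]
    return f"Your application was {outcome} because " + ", ".join(parts) + "."

print(explain({"income": 38, "credit history": 27, "employment tenure": 19},
              "declined"))
```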

STEP 03

Enable contestation

When a data subject challenges a decision, the decomposed explanation enables substantive human review. The reviewer sees the specific variables and their contributions, can assess whether the decision was appropriate, and can make an informed override. This satisfies Article 22(3).
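One way to make that review auditable is to require the reviewer to record a rationale that engages with the decomposed factors — a bare confirmation is rejected. A sketch of such a contestation record (the schema and field names are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class Contestation:
    """Audit record for an Article 22(3) human review (illustrative)."""
    decision_id: str
    contributions: dict   # factor -> percentage share of the decision
    reviewer: str = ""
    override: bool = False
    rationale: str = ""

    def review(self, reviewer, override, rationale):
        # A review without a recorded rationale is rubber-stamping,
        # which is exactly what Article 22(3) safeguards must exclude.
        if not rationale:
            raise ValueError("rationale is required for a substantive review")
        self.reviewer, self.override, self.rationale = reviewer, override, rationale
        return self

c = Contestation("D-2024-0017", {"debt-to-income ratio": 28, "tenure": 34})
c.review("reviewer-7", override=True,
         rationale="DTI was computed on stale salary data; outcome reversed")
```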

AI Asset Assurance vs alternatives

What you get that other approaches don't

Governance platforms help you build internal compliance programs — but they sell software to the companies they evaluate. Consulting auditors provide independence — but take weeks per engagement. AI Asset Assurance combines both: genuine structural independence delivered through a streamlined platform, with patent-protected evaluation methods no other provider offers.

| Capability | Generic "explainability" | LIME / SHAP library | Holistic AI | AI Asset Assurance |
| --- | --- | --- | --- | --- |
| Individual-level explanations | Template statements | ✓ Technical output | Aggregate metrics | ✓ Per-decision causal |
| Meaningful to data subjects | Generic language | Requires expertise | Dashboard metrics | ✓ Plain-language causal |
| Enables contestation | — | — | Partial | ✓ Decomposed for review |
| Independent validation | Self-assessment | Self-implemented | Single platform | ✓ 5 independent validators |
| DPIA-ready documentation | — | — | ✓ Templates | ✓ Technical assessment |
Real exposure

What companies face when automated decisions can't be explained

Each scenario reflects documented patterns from DPA enforcement actions, CJEU rulings, and GDPR complaints — applied to automated decision-making covered by Article 22.

GDPR penalty structure: Up to €20 million or 4% of global annual turnover (whichever is higher) for violations of data subject rights including Article 22. DPAs across 30 EEA countries can investigate and enforce independently. Since 2018, GDPR fines have exceeded €4.5 billion cumulatively. Article 22 gives every individual the right not to be subject to automated decisions that produce legal or significant effects — unless the controller can demonstrate safeguards including the right to obtain meaningful information about the logic involved.

AI loan denial can't explain "the logic involved" to rejected applicant
Financial services · Up to 4% of turnover

A European digital lender uses an AI model for instant credit decisions. A customer in Germany is denied and exercises their right under Article 15(1)(h) to obtain "meaningful information about the logic involved." The lender responds with a generic explanation: "the decision was based on creditworthiness factors including income, employment, and credit history." The customer files a complaint with the BfDI (German DPA). The CJEU has ruled (Case C-634/21, SCHUFA) that merely naming categories of data isn't sufficient — the data subject is entitled to understand how the decision was actually reached, including the significance and consequences of the processing.

WITHOUT EXPLANATION CAPABILITY

The lender can't provide more detail because the model is a black box. The BfDI finds an Article 22 violation — the explanation was not "meaningful." The lender must either stop automated lending decisions for EU data subjects or provide genuine explanations. Remediation requires rebuilding the explanation pipeline. During the enforcement period, every automated decision is a potential violation. At €200M revenue, maximum penalty exposure is €8M.

WITH AI Asset Assurance EVALUATION

Shapley attribution generates per-decision explanations: "Your application was declined primarily because of a 34% contribution from employment tenure (less than 12 months at current employer), 28% from debt-to-income ratio (above 42%), and 19% from credit utilization rate. The most impactful change would be maintaining current employment for 6 additional months, which would improve your score by an estimated 15 points." That's meaningful. That's what Article 22 requires.
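The "most impactful change" part of that explanation is a counterfactual search: among attainable changes to the applicant's features, find the one that most improves the score. A minimal sketch — the feature names, candidate values, and toy scorer are illustrative, and a production system would restrict candidates to changes the applicant can actually make:

```python
def best_single_change(features, candidate_changes, model):
    """Find the single feature change that most improves the score.

    `candidate_changes` maps a feature name to a list of attainable
    alternative values for this applicant (illustrative).
    Returns (feature, target value, estimated score gain).
    """
    base = model(features)
    best = (None, None, 0.0)
    for name, options in candidate_changes.items():
        for new_value in options:
            trial = dict(features, **{name: new_value})
            gain = model(trial) - base
            if gain > best[2]:
                best = (name, new_value, gain)
    return best

# Toy scorer: longer tenure helps, higher debt-to-income hurts.
score = lambda x: 2 * x["tenure_months"] - 50 * x["dti"]
applicant = {"tenure_months": 6, "dti": 0.45}
options = {"tenure_months": [12], "dti": [0.40]}
print(best_single_change(applicant, options, score))
```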

CJEU Case C-634/21 (SCHUFA Holding, Dec 2023) — the Court ruled that credit scoring by automated means constitutes automated decision-making under Article 22, and data subjects are entitled to meaningful explanations of the logic involved.

AI content moderation removes posts without explainable reasoning
Platform · Up to 4% of turnover

A social media platform uses AI to automatically remove content and suspend accounts. A user in France has their account suspended for "community guideline violations" based on automated content classification. The user contests the decision under Article 22. The platform can identify which guideline was triggered but cannot explain why the AI classified the specific content as violating that guideline — particularly when similar content from other users was not flagged. The disparity suggests the AI's classification logic may correlate with language patterns associated with certain ethnic or religious communities.

WITHOUT EXPLANATION CAPABILITY

The CNIL investigates after multiple complaints about disproportionate content removal affecting French-speaking North African users. The platform cannot explain why the AI treats linguistically similar content differently. Article 22 violation compounded by potential Article 9 violation (processing revealing racial or ethnic origin). For a platform with €10B annual revenue, 4% penalty exposure is €400M. The DSA may also apply, compounding enforcement.

WITH AI Asset Assurance EVALUATION

Cohort analysis reveals the content removal disparity across linguistic and demographic groups. Shapley attribution identifies specific language patterns — dialectal Arabic-French code-switching, certain cultural references — as the features driving disproportionate flagging. The platform adjusts the classification model to distinguish between actual violations and cultural linguistic patterns. Each individual contestation can be answered with specific causal explanation.
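The cohort analysis step boils down to comparing removal rates across groups and flagging large ratios for Shapley follow-up. A minimal sketch with illustrative cohort labels (the threshold at which a ratio triggers investigation is a policy choice, not fixed here):

```python
from collections import defaultdict

def removal_rates(decisions):
    """Per-cohort removal rates from (cohort, removed) records."""
    counts = defaultdict(lambda: [0, 0])  # cohort -> [removed, total]
    for cohort, removed in decisions:
        counts[cohort][0] += int(removed)
        counts[cohort][1] += 1
    return {c: removed / total for c, (removed, total) in counts.items()}

def disparity_ratio(rates, group, reference):
    """Ratio of a cohort's removal rate to a reference cohort's rate.

    Values well above 1.0 flag the disparity; Shapley attribution then
    identifies which features drive the difference.
    """
    return rates[group] / rates[reference]

# Illustrative moderation log: cohort A flagged 3x as often as cohort B.
log = ([("cohort-a", True)] * 3 + [("cohort-a", False)] * 7 +
       [("cohort-b", True)] * 1 + [("cohort-b", False)] * 9)
rates = removal_rates(log)
print(disparity_ratio(rates, "cohort-a", "cohort-b"))
```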

Multiple DPAs have investigated social media content moderation as automated decision-making under Article 22. The DSA's transparency requirements for algorithmic systems compound the GDPR exposure.

AI insurance claim processing denies coverage without human review
Insurance · Up to 4% + class claims

A European insurer automates health insurance claim processing using AI. Claims under €5,000 are processed entirely by the AI — approved or denied without human involvement. The insurer characterizes this as "human-in-the-loop" because a human designed the rules. But Article 22 requires meaningful human intervention in individual decisions, not just system design. When a policyholder in the Netherlands has a claim denied and requests human review, the "review" consists of a human confirming the AI's output without independent assessment — rubber-stamping, not reviewing.

WITHOUT EXPLANATION CAPABILITY

The AP (Dutch DPA) investigates and finds the insurer's process doesn't meet Article 22(3) — the human review is pro forma, not meaningful. The insurer can't explain to denied policyholders why their specific claim was rejected beyond "it didn't meet the model's criteria." Thousands of claims were processed without valid Article 22 safeguards. The AP has issued fines exceeding €50M for GDPR violations. Every denied claim without meaningful explanation is a potential individual complaint.

WITH AI Asset Assurance EVALUATION

Shapley attribution provides per-claim explanations: "This claim was denied because the treatment code (42% contribution) is classified as elective under your policy terms, and the provider's billing code (31% contribution) doesn't match the diagnosis code." The human reviewer receives the causal decomposition and can make a genuinely informed decision. The policyholder receives a meaningful explanation. The insurer demonstrates Article 22(3) compliance with documented, explainable human oversight.

Comparable: UnitedHealth/nH Predict — AI algorithm denied post-acute care claims with 90% reversal rate on appeal, demonstrating that automated claim denial without meaningful human review produces systematic harm.

AI HR system auto-rejects internal transfer applications using opaque scoring
Employment · Up to 4% + labor claims

A European multinational uses AI to score internal employees applying for transfers and promotions across EU offices. The system automatically rejects applications below a threshold score. An employee in Spain, denied a transfer to a senior role, exercises their GDPR right to request information about the automated decision. The HR department can only provide the score — not what drove it. The employee notices that colleagues with similar qualifications but different national backgrounds were approved. The employee files a complaint with the AEPD (Spanish DPA) and simultaneously files a labor discrimination claim.

WITHOUT EXPLANATION CAPABILITY

The employer can't explain the score difference between the complainant and approved colleagues. The AEPD finds an Article 22 violation. The labor court finds the employer cannot demonstrate the decision was non-discriminatory — the burden of proof shifts to the employer under EU anti-discrimination law, and without explanation capability they can't meet that burden. Dual exposure: GDPR penalty plus labor damages, back pay, and potential reinstatement order.

WITH AI Asset Assurance EVALUATION

Shapley attribution provides individual-level explanations for every internal transfer decision — showing exactly which variables drove each score and by how much. The employer can explain to the Spanish employee precisely why their score differed from approved colleagues, addressing specific variables. If the explanation reveals a legitimate difference (relevant experience in the target role), the employer has documented evidence. If it reveals a proxy for national origin, the employer can fix the model before more employees are affected.

Article 22 complaints are increasingly combined with anti-discrimination claims, creating dual GDPR + labor law exposure. The CJEU's SCHUFA ruling strengthened the right to explanation for all automated decisions with significant effects.

€20M: maximum fine for Article 22 violations
4%: of global turnover (whichever is higher)
€4.5B+: cumulative GDPR fines since 2018

Article 22's right to explanation requires per-decision causal attribution — exactly what Shapley decomposition provides. Every individual affected by your AI has the right to understand why. AI Asset Assurance makes that explanation possible, accurate, and documented.

Every automated decision deserves a real explanation.

Not template language. Not aggregate scores. Individual-level causal attribution that satisfies Article 22 and actually means something to the person affected.

Request evaluation