EU AI Act enforcement deadline: August 2, 2026. Also covering: NYC Local Law 144 (live now) · Colorado AI Act (June 30, 2026) · SR 11-7 model validation · GDPR Article 22.
Regulatory coverage

One evaluation pipeline. Every regulation that matters.

AI Asset Assurance doesn’t just evaluate your AI system once and walk away. Five structurally independent validators provide initial evaluation with Shapley causal attribution, then sealed deployment verification continuously confirms the AI running in production is the same system that was evaluated. Ongoing monitoring detects drift, emerging bias, and behavioral changes in real time — mapped to every applicable framework across whichever jurisdictions you operate in.

14 regulations across 4 continents
7+ enforcement deadlines in 2026
~$0 marginal cost per additional state report
Select a regulation to learn how AI Asset Assurance addresses each requirement
Aug 2, 2026

EU AI Act

Annex III high-risk systems • Employment, lending, insurance, education

The most comprehensive AI regulation globally. Annex III high-risk systems require conformity assessment with documented risk management, bias testing, and human oversight. Penalties up to €35 million or 7% of global annual turnover. AI Asset Assurance maps directly to Articles 9 through 15.

Art. 9 Risk mgmt · Art. 12 Record-keeping · Art. 13 Transparency · Art. 15 Accuracy
View full details →
Live now

NYC Local Law 144

Automated Employment Decision Tools • NYC hiring & promotion

Annual independent bias audits required for any AEDT used for NYC jobs. Most auditors calculate selection rates and stop. AI Asset Assurance delivers Shapley decomposition that identifies why disparities exist — not just that they exist. Penalties: $500–$1,500 per violation per person per day.

Impact ratios · Candidate notice · Annual audit · Publication req
View full details →
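The impact-ratio arithmetic at the core of an LL144-style bias audit is simple enough to sketch. The function and numbers below are illustrative only (hypothetical applicant counts, not the audit methodology itself): each group's selection rate is divided by the highest group's rate.

```python
def impact_ratios(selected: dict, applicants: dict) -> dict:
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical applicant pool for one AEDT over one audit period
applicants = {"group_a": 200, "group_b": 150}
selected = {"group_a": 80, "group_b": 45}

ratios = impact_ratios(selected, applicants)
# group_a selects at 0.40, group_b at 0.30, so group_b's ratio is 0.30 / 0.40
print(ratios)
```

Under the commonly referenced four-fifths benchmark, a ratio below 0.80 flags a disparity worth investigating — which is exactly where the calculation above stops and Shapley decomposition picks up, attributing the gap to specific model inputs.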
June 30, 2026

Colorado AI Act

High-risk AI deployers • Algorithmic discrimination

Deployers of high-risk AI must use reasonable care to protect consumers from algorithmic discrimination. Annual impact assessments mandatory. The 'reasonable care' standard means evidence of evaluation IS the defense — and absence of testing is evidence of negligence. AG enforcement only, no private right of action.

Reasonable care · Impact assessment · Consumer disclosure · Annual requirement
View full details →
Active — ongoing

SR 11-7 — Model Risk Management

Banks, financial institutions • Fed, OCC, FDIC supervised

Federal Reserve SR 11-7 and OCC 2011-12 require independent model validation for all material models. AI/ML models in lending, credit scoring, fraud detection, and trading are increasingly in examination scope. AI Asset Assurance's structural independence exceeds SR 11-7's minimum requirement.

Independent validation · Outcomes analysis · Ongoing monitoring · Model inventory
View full details →
Live now

GDPR Article 22

Automated decisions affecting EU data subjects • Right to explanation

EU data subjects have the right to meaningful information about the logic of automated decisions that affect them. Most AI systems cannot provide this. AI Asset Assurance's Shapley attribution generates individual-level causal explanations — showing each person exactly which variables drove their decision and by how much. Penalties up to €20 million or 4% of global turnover.

Right to explanation · Contestation support · DPIA integration · Individual-level
View full details →
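What an individual-level Shapley explanation looks like can be sketched for the simplest case: for a purely linear model, the exact Shapley value of each feature reduces to coefficient × (feature value − population mean), and the attributions sum to the gap between this person's score and the average score. The model, feature names, and numbers below are hypothetical, chosen only to make the arithmetic visible:

```python
# Hypothetical linear credit-scoring model (illustrative numbers only)
COEFS = {"income": 0.00001, "debt_ratio": -2.0, "years_employed": 0.05}
MEANS = {"income": 55_000.0, "debt_ratio": 0.30, "years_employed": 6.0}

def shapley_explain(applicant: dict) -> dict:
    """For a linear model, each feature's exact Shapley value is
    coefficient * (feature value - population mean). The attributions
    sum to (this applicant's score - the average applicant's score)."""
    return {f: COEFS[f] * (applicant[f] - MEANS[f]) for f in COEFS}

applicant = {"income": 40_000.0, "debt_ratio": 0.55, "years_employed": 2.0}
contrib = shapley_explain(applicant)

# Rank the drivers of this individual decision, most negative first
for feature, delta in sorted(contrib.items(), key=lambda kv: kv[1]):
    print(f"{feature}: {delta:+.2f}")
```

That per-applicant breakdown — which variables moved the score and by how much — is the "meaningful information about the logic" Article 22 asks for, stated in terms a data subject can contest. Real models are nonlinear, so production attribution requires full Shapley estimation rather than this closed-form shortcut.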
Live now

California Employment AI

California employers • FEHA + Civil Rights Council regulations

California's Civil Rights Council regulations make bias testing evidence admissible in employment discrimination lawsuits. Having an evaluation is evidence of diligence. Not having one is evidence of negligence. Private right of action, class actions, uncapped compensatory and punitive damages.

FEHA categories · Evidentiary standard · Business necessity · Private right of action
View full details →
2025-2026

California Healthcare AI

SB 1120, AB 3030, AB 489, AB 2575 • Clinical AI systems

Four laws creating the most comprehensive healthcare AI regulatory stack in the US. SB 1120 prohibits AI-only coverage denials. AB 2575 eliminates the vendor's "doctor should have caught it" defense. Criminal penalties for willful violations.

SB 1120 · AB 3030 · AB 489 · AB 2575
View full details →
Jan 2026

Texas RAIGA

High-risk AI deployers • $2.1T state economy

The Texas Responsible AI Governance Act requires governance programs, impact assessments, and consumer transparency for high-risk AI. The Texas Attorney General has civil investigative demand authority. Energy, healthcare, financial services, and defense sectors all affected.

Impact assessment · AG enforcement · Consumer disclosure · Governance program
View full details →
Live now

Illinois AI Employment

AIVIA + HB 3773 • AI video interview & hiring

Illinois requires candidate notification and consent before AI analyzes video interviews, with expanded demographic impact reporting under HB 3773. Any employer recruiting Illinois candidates is covered — regardless of where it is headquartered.

Video AI consent · Demographic impact · Data retention · IHRA categories
View full details →
Active — ongoing

UK AI Regulation

FCA, ICO, MHRA, Ofcom, CMA • Sector-based approach

The UK regulates AI through existing sector regulators — each with real enforcement authority. The FCA governs AI in financial services, the ICO enforces UK GDPR automated decision-making, the MHRA governs AI medical devices. One AI Asset Assurance evaluation satisfies requirements across all five regulators.

FCA · ICO / UK GDPR · MHRA · CMA
View full details →
Jan 2026

South Korea Basic AI Act

Extraterritorial reach • High-impact AI systems

South Korea's AI Act applies extraterritorially — any AI system affecting Korean users is covered regardless of company location. Requirements include transparency, risk assessment, human oversight, and documentation. That reach takes in Samsung, Hyundai, LG — and every company with Korean customers.

Extraterritorial · Risk assessment · Human oversight · Transparency
View full details →
Active — enforcing

China AI Regulations

GenAI Measures, Algorithm Rules, Deep Synthesis • Multiple laws

China has the most active AI enforcement regime globally with four binding regulations already in effect. Any company selling AI-powered products or services in China faces immediate compliance obligations including consent, content labeling, user rights, and algorithm filing.

GenAI Measures · Algorithm Rules · Deep Synthesis · CAC enforcement
View full details →
Advancing

Brazil AI Framework

Bill 2338 • Risk-based, EU AI Act aligned • 215M population

Brazil's comprehensive AI bill — approved by the Senate and advancing — creates a risk-based framework closely aligned with the EU AI Act. Impact assessments, individual rights to contest AI decisions, and mandatory incident reporting for high-risk systems. Latin America's largest economy.

Risk-based · Individual rights · Impact assessment · EU-aligned
View full details →
Proposed

Canada AIDA

AI and Data Act (Bill C-27) • High-impact AI systems

Canada's proposed AI and Data Act would establish risk-based requirements for high-impact AI in employment, financial services, healthcare, and essential services. With $900 billion in annual US-Canada trade, AIDA would affect every US company selling AI-powered products into Canada.

High-impact AI · Bias mitigation · Transparency · Cross-border
View full details →

Not sure which regulation applies to you?

Tell us about your AI system and we'll map it to every applicable regulation — then scope the evaluation accordingly.

Request evaluation