Enforcing now · California

In California, AI bias testing is evidence. Not having it is also evidence.

California's Civil Rights Council regulations make bias testing results admissible as evidence in employment discrimination lawsuits. If you test and find nothing, it supports your defense. If you test and fix what you find, it demonstrates diligence. If you don't test at all, the absence of testing is evidence of negligence. AI Asset Assurance provides independent bias testing with causal attribution and structural independence.

Request evaluation
REGULATION STATUS
Status: Enforced — live now
Scope: CA employment decisions
Legal framework: FEHA + CRC regulations
AI Asset Assurance coverage: Full bias evaluation
Litigation trend: Increasing AI claims
What the regulation requires

Requirements and how AI Asset Assurance addresses each one

Every requirement maps directly to an AI Asset Assurance capability. One evaluation pipeline covers everything you need.

Prohibition of AI-enabled discrimination

FEHA + CRC Regulations

California's Fair Employment and Housing Act prohibits employment discrimination. The Civil Rights Council regulations explicitly extend this to AI and algorithmic systems used for hiring, promotion, and termination decisions.

AI Asset Assurance: Five independent validators evaluate your AI system across all FEHA-protected categories — race, color, religion, sex, gender identity, sexual orientation, national origin, ancestry, disability, age, and more. Any disparate impact is decomposed to root causes with remediation prescriptions.

Evidentiary standard for bias testing

CRC Regulations

Bias testing results are admissible as evidence in discrimination proceedings. An employer who conducts testing and acts on findings demonstrates reasonable efforts. An employer who does not test faces an inference of negligence.

AI Asset Assurance: The digitally signed certification report is designed to serve as evidence. Five structurally independent validators make the evaluation more credible than a single auditor's opinion. Shapley attribution demonstrates you understand the system's behavior at a causal level.

Business necessity defense

FEHA § 12940

If disparate impact is found, employers must demonstrate that the AI system is consistent with business necessity and that no less discriminatory alternative is available.

AI Asset Assurance: Shapley decomposition identifies exactly which variables drive disparate impact. This enables a targeted business necessity analysis — you can demonstrate why each contributing variable is job-related, rather than defending the entire system as a black box. Remediation prescriptions identify less discriminatory alternatives.

Record-keeping obligations

CRC Regulations

Employers must maintain records of AI system usage, testing results, and any remedial actions taken. Records must be retained for the applicable statute of limitations period.

AI Asset Assurance: Every evaluation generates a comprehensive record including system characterization, evaluation methodology, findings with causal attribution, and remediation recommendations. All records are digitally signed and timestamped. Sealed deployment verification provides an ongoing record of system integrity.

Exposure: Private lawsuits, class actions, and CRD enforcement

California allows a private right of action for employment discrimination, including class actions. The Civil Rights Department (CRD, formerly DFEH) can also investigate and bring enforcement actions. AI discrimination claims are an active and growing area of litigation. Compensatory and punitive damages are uncapped in California.

Who is affected

Is your organization in scope?

These are the organizations most likely to need compliance evaluation under this regulation.

California employers

Any employer with 5+ employees using AI for employment decisions in California. Includes remote employers hiring California residents. California has the broadest FEHA protections in the country.

DIRECTLY IN SCOPE

Staffing and recruitment

Staffing agencies and RPO providers using AI screening tools for California positions. The agency and the client employer may both face liability.

DIRECTLY IN SCOPE

HR technology vendors

ATS, assessment, and screening tool providers whose customers operate in California. Vendor liability theories are expanding, and customer contracts increasingly require bias testing.

INDIRECT — CUSTOMER DEMAND
How AI Asset Assurance works

From evaluation to certification in three steps

AI Asset Assurance doesn't just find problems — it identifies root causes with Shapley attribution and prescribes exactly how to fix them.

STEP 01

Evaluate across all FEHA categories

Five structurally independent validators evaluate your AI system across every FEHA-protected category — race, sex, age, disability, and more. The evaluation identifies disparate impact before it becomes a lawsuit.

STEP 02

Decompose and defend

If disparate impact is found, Shapley attribution identifies exactly which variables drive it. This enables a targeted business necessity defense — "Variable X contributes 43% of the disparity and is directly job-related because..." — instead of defending a black box.
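The decomposition step can be illustrated with an exact Shapley computation over a toy scoring model. The weights, feature names, and group means below are synthetic assumptions for illustration — not AI Asset Assurance's actual method or data:

```python
from itertools import permutations
from statistics import mean

# Hypothetical linear scorer — weights are illustrative assumptions.
WEIGHTS = {"university_prestige": 0.5, "years_experience": 0.3, "skills_match": 0.2}

# Synthetic group-mean feature values for two applicant cohorts.
group_a = {"university_prestige": 0.8, "years_experience": 0.6, "skills_match": 0.7}
group_b = {"university_prestige": 0.4, "years_experience": 0.5, "skills_match": 0.7}

def score(features):
    return sum(WEIGHTS[k] * v for k, v in features.items())

def gap_with_subset(subset):
    """Score gap when only features in `subset` take group-specific values;
    all other features are held at group_b's values for both groups."""
    fa = {k: (group_a[k] if k in subset else group_b[k]) for k in WEIGHTS}
    return score(fa) - score(group_b)

def shapley(feature):
    """Average marginal contribution of `feature` to the score gap,
    taken over every ordering of the features (exact Shapley value)."""
    names = list(WEIGHTS)
    contribs = []
    for order in permutations(names):
        before = set(order[:order.index(feature)])
        contribs.append(gap_with_subset(before | {feature}) - gap_with_subset(before))
    return mean(contribs)

total_gap = gap_with_subset(set(WEIGHTS))
for f in WEIGHTS:
    phi = shapley(f)
    print(f"{f}: {phi:+.3f} ({phi / total_gap:.0%} of gap)")
```

Because Shapley values sum exactly to the total gap, each variable's share of the disparity can be quoted directly — the "Variable X contributes 43%" form of argument above.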

STEP 03

Document and protect

Receive a digitally signed certification report that serves as evidence of diligence. If a claim is filed, the report demonstrates: you tested, you understood the results at a causal level, and you took appropriate action. This is the defense that matters.

AI Asset Assurance vs alternatives

What you get that other approaches don't

Governance platforms help you build internal compliance programs — but they sell software to the companies they evaluate. Consulting auditors provide independence — but take weeks per engagement. AI Asset Assurance combines both: genuine structural independence delivered through a streamlined platform, with patent-protected evaluation methods no other provider offers.

Capability                 | No testing              | Internal review           | Law firm audit   | AI Asset Assurance
Evidentiary value          | Inference of negligence | Limited (self-assessment) | ✓ Expert opinion | ✓ Independent, decomposed
Causal attribution         | —                       | —                         | —                | ✓ Shapley decomposition
Business necessity support | —                       | General analysis          | ✓ Legal argument | ✓ Variable-level evidence
Structural independence    | —                       | —                         | Single firm      | ✓ 5 validators
Remediation prescription   | —                       | —                         | Legal advice     | ✓ Technical fix instructions
Ongoing monitoring         | —                       | Ad hoc                    | Annual review    | ✓ Quarterly (Standard)
Per-request verification   | —                       | —                         | —                | ✓ Real-Time tier
Real-Time tier

Every AI decision verified. Every decision defensible.

In California, the evidentiary standard favors documented diligence. Real-Time verification creates a per-decision compliance record that transforms your litigation posture.

Standard evaluation
Annual / quarterly evaluation

Full FEHA-aligned evaluation with Shapley attribution. Documented evidence of diligence. Business necessity analysis. Litigation-ready reporting. Strong defense — but a plaintiff's attorney can argue the AI changed between evaluations.

Real-Time tier
Per-request verification

Every hiring decision, every performance score, every promotion recommendation is verified at the moment it's made. The Shapley attribution for each individual decision is logged. When a plaintiff asks "was the AI fair on the day my client was rejected?" — you have the exact causal decomposition for that specific decision, not an aggregate from last quarter.
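A per-decision record of this kind can be sketched as a signed, timestamped receipt. The schema, field names, and HMAC signing below are illustrative assumptions — the real service would use its own format and externally managed keys:

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Assumption: in practice the key lives in an HSM or secrets manager.
SIGNING_KEY = b"replace-with-managed-signing-key"

def verification_receipt(decision_id, model_version, outcome, attribution):
    """Build a signed, timestamped per-decision record (illustrative schema)."""
    record = {
        "decision_id": decision_id,
        "model_version": model_version,      # ties the decision to a certified model
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "outcome": outcome,
        "shapley_attribution": attribution,  # per-variable contribution, this decision
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

# Hypothetical rejected-application decision with its causal decomposition.
receipt = verification_receipt(
    "req-2026-03-15-0042",
    "scorer-v4.1-cert-2026Q1",
    "rejected",
    {"years_experience": -0.31, "skills_match": -0.12, "interview_score": 0.05},
)
print(json.dumps(receipt, indent=2))
```

Anyone holding the key can later recompute the HMAC over the record (minus the signature field) to confirm the receipt was not altered after the decision was made.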

The litigation difference: A plaintiff's attorney deposes your head of HR. "How do you know the AI was fair when it rejected my client on March 15?" With annual evaluation: "We tested it last January." With Real-Time: "Here is the verification receipt and Shapley attribution for that exact decision. The AI was the same certified system, and these five variables drove the outcome." That's the difference between surviving a motion to dismiss and going to trial.

Real exposure

What California companies actually face without independent AI evaluation

These aren't hypotheticals. Each scenario reflects documented patterns from active litigation, regulatory enforcement, and published research — applied to common AI use cases in California.

Why California is different: FEHA covers more protected categories than federal law. Private right of action means anyone can sue — not just the AG. Compensatory and punitive damages are uncapped. The CRC regulations explicitly make bias testing evidence admissible — creating an evidentiary asymmetry where having testing helps your defense and not having testing builds the plaintiff's case.

AI resume screening rejects applicants using proxy variables for race
Hiring · $50M–$100M+

A tech company uses an ATS with AI screening to process 50,000 applications per year. The AI was trained on historical hiring data that reflects past demographic patterns. A "university prestige" variable acts as a proxy for race and socioeconomic status. The rejection rate for one demographic group is 78%, versus 52% for another. Under FEHA, rejected applicants don't need to prove intent — disparate impact is sufficient.
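A disparity like this is commonly screened with the EEOC's four-fifths rule — an adverse-impact guideline from federal selection-procedure guidance, not a FEHA-specific test, but a standard first check. Using the scenario's rejection rates:

```python
def selection_rate_ratio(reject_a, reject_b):
    """Adverse-impact ratio (EEOC four-fifths rule): the lower group's
    selection rate divided by the higher group's selection rate."""
    sel_a, sel_b = 1 - reject_a, 1 - reject_b
    return min(sel_a, sel_b) / max(sel_a, sel_b)

# Rejection rates from the scenario: 78% for one group, 52% for the other,
# i.e. selection rates of 22% and 48%.
ratio = selection_rate_ratio(0.78, 0.52)
print(f"Impact ratio: {ratio:.2f}")
assert ratio < 0.8  # below the four-fifths threshold: prima facie disparate impact
```

A ratio this far below 0.80 would put the burden on the employer to justify the practice — exactly the posture the business necessity analysis above addresses.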

WITHOUT EVALUATION

The company can't explain why the disparity exists. No testing means no evidence of diligence. Class action representing every affected applicant over four years. FEHA is a fee-shifting statute — plaintiff's attorneys get paid by the defendant.

WITH AI Asset Assurance EVALUATION

Shapley attribution identifies the "university prestige" variable as contributing 34% of the disparity — a proxy for race. The company reweights it before processing applications. The disparity is addressed before a single applicant is harmed.

Comparable: Mobley v. Workday — nationwide collective action preliminarily certified May 2025, potentially covering millions of applicants.

AI performance scoring drives discriminatory layoff decisions
Workforce · $150M–$300M

A company with 3,000 employees uses AI-assisted performance scoring during a reduction in force. Communication pattern analysis penalizes non-native English patterns and fewer internal network connections — variables correlating with national origin, disability, and sex. 400 employees are terminated, disproportionately affecting women, employees over 40, and employees of Asian and Hispanic origin.

WITHOUT EVALUATION

CRC regulations required proactive testing before AI-driven termination decisions. The company didn't test. Each terminated employee has a private right of action. Back pay for 400 employees at $120K avg × 2 years = $96M before emotional distress and punitive damages.

WITH AI Asset Assurance EVALUATION

Cohort analysis flags the disparate impact before the RIF executes. Shapley attribution isolates "communication pattern" metrics as the primary driver. The company adjusts the scoring model, re-runs evaluation, and confirms equitable outcomes. The RIF proceeds with documented diligence.

CRC regulations (effective Oct 2025) explicitly require proactive testing before AI-driven personnel decisions.

AI insurance pricing perpetuates historical redlining patterns
Insurance · $50M–$200M

A California insurer uses AI for homeowner's policy pricing. "Neighborhood risk scores" incorporate historical data reflecting redlining — neighborhoods historically underinsured now show higher claims-to-coverage ratios. Homeowners in historically Black and Hispanic neighborhoods in Los Angeles, Oakland, and Sacramento pay 20–40% more for equivalent coverage. The model doesn't use race, but the outcome is discriminatory.

WITHOUT EVALUATION

Actuarial soundness is not a defense against disparate impact under California Insurance Code § 679.71 and FEHA. DOI investigates. Class action by affected policyholders. Potential license revocation under Proposition 103. Exposure: $50M–$200M in premium refunds plus penalties.

WITH AI Asset Assurance EVALUATION

Shapley attribution identifies the neighborhood risk score as contributing 47% of the pricing disparity across racial groups. The evaluation prescribes alternative risk factors that achieve equivalent predictive accuracy without discriminatory impact. The insurer adjusts before the next pricing cycle.

Comparable: Obermeyer et al. (Science, 2019) — widely-used healthcare algorithm found to systematically disadvantage Black patients.

AI lending model produces 2.3x denial rate disparity across racial groups
Lending · $75M–$750M

A California fintech uses AI for instant personal loan decisions. "Financial behavior" signals — small-dollar transaction frequency, deposit timing regularity, savings-to-spending ratio — correlate with race and gender. Gig workers and tipped employees (disproportionately Hispanic and female) are penalized. Denial rates for Black and Hispanic applicants are 2.3x higher than for white applicants with equivalent credit scores.

WITHOUT EVALUATION

The lender can prove the model doesn't use race. They can't explain why denial rates differ by 2.3x. DFPI investigates. Class action under UCL and CCFPL. 100,000 decisions × 15% discriminatory rate = 15,000 affected individuals. Federal ECOA exposure compounds the state claims.

WITH AI Asset Assurance EVALUATION

Shapley attribution identifies "deposit timing regularity" and "small-dollar transaction frequency" as driving 61% of the racial disparity. The lender replaces these with variables that predict creditworthiness without discriminatory proxy effects. Revised model maintains accuracy while reducing the denial gap.

Comparable: Apple Card / Goldman Sachs — 18-month DFS investigation over credit limit disparity driven by proxy variables.

AI patient triage deprioritizes non-native English speakers
Healthcare · $5M–$50M per event

A California healthcare network uses AI to triage patient inquiries. The system was trained on historical data where physicians responded faster to clinical terminology. "Crushing substernal pressure" gets triaged faster than "my chest feels funny." Language patterns correlate with education, race, and socioeconomic status. Black and Hispanic patients experience 40% longer average triage times for equivalent urgency.

WITHOUT EVALUATION

A patient dies because the AI deprioritized their complaint based on language. This isn't a fine — it's wrongful death. Under the Unruh Civil Rights Act, which applies to healthcare, discrimination damage caps don't apply. Single event: $5M–$50M. Pattern across the network: hundreds of millions.

WITH AI Asset Assurance EVALUATION

Cohort analysis reveals the 40% triage time disparity. Shapley attribution identifies "linguistic complexity of symptom description" as the dominant factor. The network retrains the triage model on standardized symptom classifications rather than raw patient language, eliminating the proxy discrimination.

Comparable: Obermeyer et al. (Science, 2019) — algorithm used by hospitals nationwide found to systematically disadvantage Black patients.

Uncapped — compensatory and punitive damages under FEHA
Private right of action — anyone can sue, not just the AG
Fee-shifting — under FEHA, plaintiff's attorneys are paid by the defendant if they win

Every scenario shares the same structure: the AI doesn't use protected characteristics directly, but uses variables that correlate with them. The company didn't test. When harm materializes, they can't explain what caused it. An AI Asset Assurance evaluation costs $8K–$20K. A single employment discrimination lawsuit costs $100K–$500K in defense alone — before damages.

In California, testing documents your diligence.

The absence of bias testing is evidence of negligence. The presence of rigorous, independent testing with causal attribution documents your diligence. Don't wait for the claim.

Request evaluation