California's Civil Rights Council regulations make bias-testing results admissible as evidence in employment discrimination lawsuits. If you test and find nothing, it supports your defense. If you test and fix what you find, it demonstrates diligence. If you don't test at all, the absence of testing is itself evidence of negligence. AI Asset Assurance provides independent bias testing with causal attribution and structural independence.
Request evaluation

Every requirement maps directly to an AI Asset Assurance capability. One evaluation pipeline covers everything you need.
California's Fair Employment and Housing Act (FEHA) prohibits employment discrimination. The Civil Rights Council regulations explicitly extend this prohibition to AI and algorithmic systems used for hiring, promotion, and termination decisions.
Bias testing results are admissible as evidence in discrimination proceedings. An employer who conducts testing and acts on findings demonstrates reasonable efforts. An employer who does not test faces an inference of negligence.
If disparate impact is found, employers must demonstrate that the AI system is consistent with business necessity and that no less discriminatory alternative is available.
Employers must maintain records of AI system usage, testing results, and any remedial actions taken. Records must be retained for the applicable statute of limitations period.
These are the organizations most likely to need compliance evaluation under this regulation.
Any employer with 5+ employees using AI for employment decisions in California. Includes remote employers hiring California residents. California has the broadest FEHA protections in the country.
Staffing agencies and recruitment process outsourcing (RPO) providers using AI screening tools for California positions. The agency and the client employer may both face liability.
Applicant tracking system (ATS), assessment, and screening tool providers whose customers operate in California. Vendor liability theories are expanding, and customer contracts increasingly require bias testing.
AI Asset Assurance doesn't just find problems — it identifies root causes with Shapley attribution and prescribes exactly how to fix them.
Five structurally independent validators evaluate your AI system across every FEHA-protected category — race, sex, age, disability, and more. The evaluation identifies disparate impact before it becomes a lawsuit.
If disparate impact is found, Shapley attribution identifies exactly which variables drive it. This enables a targeted business necessity defense — "Variable X contributes 43% of the disparity and is directly job-related because..." — instead of defending a black box.
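To make the mechanism concrete, here is a minimal sketch of the underlying idea, not AI Asset Assurance's implementation: because Shapley values are additive, the gap in mean model score between two groups decomposes exactly into per-feature terms. The dataset, model, and feature names (including `university_prestige`) are synthetic stand-ins.

```python
# Sketch: decomposing a group score gap into per-feature Shapley terms.
# Synthetic data and hypothetical feature names throughout.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 5_000
group = rng.integers(0, 2, n)  # 0/1 stand-in for a protected class
X = pd.DataFrame({
    "university_prestige": rng.normal(group * 0.8, 1.0, n),  # correlated proxy
    "years_experience":    rng.normal(5.0, 2.0, n),
    "skills_score":        rng.normal(0.0, 1.0, n),
})
y = (0.9 * X["university_prestige"] + 0.6 * X["skills_score"]
     + rng.normal(0.0, 1.0, n)) > 0.5

model = GradientBoostingClassifier().fit(X, y)

# Per-applicant Shapley values for the model's log-odds output.
shap_values = shap.TreeExplainer(model).shap_values(X)

# Additivity: the gap in mean score between groups equals the sum of
# per-feature differences in mean Shapley value.
per_feature_gap = (shap_values[group == 1].mean(axis=0)
                   - shap_values[group == 0].mean(axis=0))
total_gap = per_feature_gap.sum()
for name, g in sorted(zip(X.columns, per_feature_gap), key=lambda t: -abs(t[1])):
    # Shares can exceed 100% when features partially offset each other.
    print(f"{name:22s} {g / total_gap:+.0%} of the score gap")
```

Statements like "Variable X contributes 43% of the disparity" are exactly this kind of share, computed on the real model rather than a toy.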
Receive a digitally signed certification report that serves as evidence of diligence. If a claim is filed, the report demonstrates: you tested, you understood the results at a causal level, and you took appropriate action. This is the defense that matters.
Governance platforms help you build internal compliance programs — but they sell software to the companies they evaluate. Consulting auditors provide independence — but take weeks per engagement. AI Asset Assurance combines both: genuine structural independence delivered through a streamlined platform, with patent-protected evaluation methods no other provider offers.
| Capability | No testing | Internal review | Law firm audit | AI Asset Assurance |
|---|---|---|---|---|
| Evidentiary value | Inference of negligence | Limited — self-assessment | ✓ Expert opinion | ✓ Independent, decomposed |
| Causal attribution | — | — | — | ✓ Shapley decomposition |
| Business necessity support | — | General analysis | ✓ Legal argument | ✓ Variable-level evidence |
| Structural independence | — | — | Single firm | ✓ 5 validators |
| Remediation prescription | — | — | Legal advice | ✓ Technical fix instructions |
| Ongoing monitoring | — | Ad hoc | Annual review | ✓ Quarterly (Standard) |
| Per-request verification | — | — | — | ✓ Real-Time tier |
In California, the evidentiary standard favors documented diligence. Real-Time verification creates a per-decision compliance record that transforms your litigation posture.
The Standard tier: full FEHA-aligned evaluation with Shapley attribution. Documented evidence of diligence. Business necessity analysis. Litigation-ready reporting. Strong defense — but a plaintiff's attorney can argue the AI changed between evaluations.
The Real-Time tier: every hiring decision, every performance score, every promotion recommendation is verified at the moment it's made. The Shapley attribution for each individual decision is logged. When a plaintiff asks "was the AI fair on the day my client was rejected?" — you have the exact causal decomposition for that specific decision, not an aggregate from last quarter.
The litigation difference: a plaintiff's attorney deposes your head of HR. "How do you know the AI was fair when it rejected my client on March 15?" With annual evaluation: "We tested it last January." With Real-Time: "Here is the verification receipt and Shapley attribution for that exact decision. The AI was the same certified system, and these five variables drove the outcome." That's the difference between an early dismissal and a full trial.
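For illustration only, a per-decision record might look something like the sketch below. The field names and structure are assumptions for this example, not AI Asset Assurance's actual receipt schema.

```python
# Hypothetical shape of a per-decision verification receipt. Illustrative
# only; the product's real record format is not shown here.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionReceipt:
    decision_id: str
    model_version: str         # ties the decision to a certified model
    timestamp: str
    outcome: str
    shapley_attribution: dict  # per-variable contribution to this decision

    def digest(self) -> str:
        """Content hash over canonical JSON, suitable for digital signing."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

receipt = DecisionReceipt(
    decision_id="req-2025-03-15-00421",
    model_version="screening-model@cert-2025Q1",
    timestamp=datetime.now(timezone.utc).isoformat(),
    outcome="rejected",
    shapley_attribution={"skills_score": -0.41, "years_experience": -0.12},
)
print(receipt.digest())  # stored with the decision, producible in discovery
```

The point of the hash is that the record is tamper-evident: what is produced in discovery provably matches what was logged at decision time.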
These aren't hypotheticals. Each scenario reflects documented patterns from active litigation, regulatory enforcement, and published research — applied to common AI use cases in California.
Why California is different: FEHA covers more protected categories than federal law. A private right of action means any affected individual can sue — not just the Attorney General. Compensatory and punitive damages are uncapped. The CRC regulations explicitly make bias testing results admissible as evidence — creating an evidentiary asymmetry where having testing helps your defense and not having testing builds the plaintiff's case.
A tech company uses an ATS with AI screening to process 50,000 applications per year. The AI was trained on historical hiring data that reflects past demographic patterns, and a "university prestige" variable acts as a proxy for race and socioeconomic status. The rejection rate is 78% for one group and 52% for another. Under FEHA, rejected applicants don't need to prove intent — disparate impact is sufficient.
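The arithmetic behind that exposure is simple to check. A common screen is the EEOC's four-fifths rule: the selection rate for the disadvantaged group should be at least 80% of the rate for the most-favored group. Applied to the stated rates (a generic screen, not the product's full methodology):

```python
# Four-fifths (adverse impact) screen on the stated rejection rates.
# Selection rate = 1 - rejection rate.
selection_a = 1 - 0.78  # 22% selected
selection_b = 1 - 0.52  # 48% selected
impact_ratio = min(selection_a, selection_b) / max(selection_a, selection_b)
print(f"impact ratio = {impact_ratio:.2f}")  # 0.46
if impact_ratio < 0.80:
    print("fails the four-fifths screen: presumptive disparate impact")
```

An impact ratio of 0.46 is not a borderline call; it is barely half the conventional threshold.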
The company can't explain why the disparity exists. No testing means no evidence of diligence. A class action represents every affected applicant over four years. FEHA is a fee-shifting statute — a prevailing plaintiff's attorney's fees are paid by the defendant.
Shapley attribution identifies the "university prestige" variable as contributing 34% of the disparity — a proxy for race. The company reweights it before processing applications. The disparity is addressed before a single applicant is harmed.
Comparable: Mobley v. Workday — nationwide collective action preliminarily certified May 2025, potentially covering millions of applicants.
A company with 3,000 employees uses AI-assisted performance scoring during a reduction in force. Communication-pattern analysis penalizes non-native English phrasing and sparse internal networks — variables that correlate with national origin, disability, and sex. 400 employees are terminated, disproportionately women, employees over 40, and employees of Asian and Hispanic origin.
CRC regulations require proactive testing before AI-driven termination decisions. The company didn't test. Each terminated employee has a private right of action. Back pay for 400 employees at a $120K average × 2 years = $96M, before emotional distress and punitive damages.
Cohort analysis flags the disparate impact before the RIF executes. Shapley attribution isolates "communication pattern" metrics as the primary driver. The company adjusts the scoring model, re-runs evaluation, and confirms equitable outcomes. The RIF proceeds with documented diligence.
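A cohort check of this kind can be sketched in a few lines. The counts below are synthetic but sized to the scenario (3,000 employees, 400 terminations), and `fisher_exact` is one standard significance test for a 2×2 outcome table; a real evaluation would cover every FEHA-protected category, not a single age split.

```python
# Sketch: cohort-level check of RIF outcomes on synthetic counts.
from scipy.stats import fisher_exact

# (terminated, retained) counts per cohort -- illustrative numbers only.
cohorts = {
    "age_40_plus": (220, 980),   # 18.3% terminated
    "under_40":    (180, 1620),  # 10.0% terminated
}
(t1, r1), (t2, r2) = cohorts.values()
rate_ratio = (t1 / (t1 + r1)) / (t2 / (t2 + r2))
odds_ratio, p_value = fisher_exact([[t1, r1], [t2, r2]])
print(f"termination rate ratio = {rate_ratio:.2f}, p = {p_value:.1e}")
# A ratio this far from 1.0 with a tiny p-value flags the RIF for
# Shapley-level root-cause analysis before any termination letter goes out.
```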
CRC regulations (effective Oct 2025) explicitly require proactive testing before AI-driven personnel decisions.
A California insurer uses AI for homeowner's policy pricing. "Neighborhood risk scores" incorporate historical data reflecting redlining — neighborhoods historically underinsured now show higher claims-to-coverage ratios. Homeowners in historically Black and Hispanic neighborhoods in Los Angeles, Oakland, and Sacramento pay 20–40% more for equivalent coverage. The model doesn't use race, but the outcome is discriminatory.
Actuarial soundness is not a defense against disparate impact under California Insurance Code § 679.71 and FEHA. DOI investigates. Class action by affected policyholders. Potential license revocation under Proposition 103. Exposure: $50M–$200M in premium refunds plus penalties.
Shapley attribution identifies the neighborhood risk score as contributing 47% of the pricing disparity across racial groups. The evaluation prescribes alternative risk factors that achieve equivalent predictive accuracy without discriminatory impact. The insurer adjusts before the next pricing cycle.
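FEHA's framework makes this a two-sided comparison: does the alternative preserve predictive accuracy, and does it shrink the disparity? A minimal sketch of that comparison on synthetic data, with hypothetical feature names:

```python
# Sketch: "less discriminatory alternative" check -- drop the flagged
# feature, retrain, compare accuracy and disparity. Synthetic data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)
X = pd.DataFrame({
    "neighborhood_risk": rng.normal(group * 1.0, 1.0, n),  # redlining proxy
    "claims_history":    rng.normal(0.0, 1.0, n),
    "property_age":      rng.normal(0.0, 1.0, n),
})
y = (0.7 * X["claims_history"] + 0.3 * X["neighborhood_risk"]
     + rng.normal(0.0, 1.0, n)) > 0.6

def evaluate(features):
    m = LogisticRegression().fit(X[features], y)
    p = m.predict_proba(X[features])[:, 1]
    # In-sample for brevity; a real check would use held-out data.
    return roc_auc_score(y, p), p[group == 1].mean() - p[group == 0].mean()

for feats in (list(X.columns), ["claims_history", "property_age"]):
    auc, gap = evaluate(feats)
    print(f"{feats}: AUC={auc:.3f}, mean-score gap={gap:+.3f}")
# If AUC holds while the gap shrinks, the alternative is both viable and
# less discriminatory -- the comparison the defense framework demands.
```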
Comparable: Obermeyer et al. (Science, 2019) — a widely used healthcare algorithm found to systematically disadvantage Black patients.
A California fintech uses AI for instant personal loan decisions. "Financial behavior" signals — small-dollar transaction frequency, deposit timing regularity, savings-to-spending ratio — correlate with race and gender. Gig workers and tipped employees (disproportionately Hispanic and female) are penalized. Denial rates for Black and Hispanic applicants are 2.3x higher than for white applicants with equivalent credit scores.
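The "equivalent credit scores" qualifier matters: the disparity survives after conditioning on score. A sketch of that within-band comparison on synthetic data, with hypothetical column names:

```python
# Sketch: denial-rate ratio within credit-score bands. Synthetic data;
# the group-dependent base rates stand in for the proxy effect of the
# "financial behavior" signals described above.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 100_000
df = pd.DataFrame({
    "credit_score": rng.integers(580, 800, n),
    "group":        rng.choice(["white", "black_hispanic"], n),
})
df["denied"] = rng.random(n) < np.where(df["group"] == "white", 0.08, 0.18)

df["band"] = pd.cut(df["credit_score"], bins=range(580, 821, 40), right=False)
rates = df.pivot_table(index="band", columns="group",
                       values="denied", observed=True)
rates["ratio"] = rates["black_hispanic"] / rates["white"]
print(rates.round(3))  # a ratio near 2+ in every band is the proxy signature
```

A ratio that persists in every score band rules out "they just had worse credit" as an explanation, which is exactly the posture the scenario describes.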
The lender can prove the model doesn't use race. They can't explain why denial rates differ by 2.3x. DFPI investigates. Class action under California's Unfair Competition Law (UCL) and Consumer Financial Protection Law (CCFPL). 100,000 decisions × a 15% discriminatory rate = 15,000 affected individuals. Federal ECOA exposure compounds the state claims.
Shapley attribution identifies "deposit timing regularity" and "small-dollar transaction frequency" as driving 61% of the racial disparity. The lender replaces these with variables that predict creditworthiness without discriminatory proxy effects. The revised model maintains accuracy while reducing the denial gap.
Comparable: Apple Card / Goldman Sachs — an 18-month NYDFS investigation into alleged credit limit disparities attributed to proxy variables.
A California healthcare network uses AI to triage patient inquiries. The system was trained on historical data where physicians responded faster to clinical terminology. "Crushing substernal pressure" gets triaged faster than "my chest feels funny." Language patterns correlate with education, race, and socioeconomic status. Black and Hispanic patients experience 40% longer average triage times for equivalent urgency.
A patient dies because the AI deprioritized their complaint based on language. This isn't a fine — it's wrongful death. Under the Unruh Civil Rights Act, which covers healthcare businesses, damages are uncapped. A single event: $5M–$50M. A pattern across the network: hundreds of millions.
Cohort analysis reveals the 40% triage time disparity. Shapley attribution identifies "linguistic complexity of symptom description" as the dominant factor. The network retrains the triage model on standardized symptom classifications rather than raw patient language, eliminating the proxy discrimination.
Comparable: Obermeyer et al. (Science, 2019) — algorithm used by hospitals nationwide found to systematically disadvantage Black patients.
Every scenario shares the same structure: the AI doesn't use protected characteristics directly, but uses variables that correlate with them. The company didn't test. When harm materializes, they can't explain what caused it. An AI Asset Assurance evaluation costs $8K–$20K. A single employment discrimination lawsuit costs $100K–$500K in defense alone — before damages.
The absence of bias testing is evidence of negligence. The presence of rigorous, independent testing with causal attribution documents your diligence. Don't wait for the claim.
Request evaluation