4 laws · 2025–2026

Four laws. One message: if your AI touches patients, you need independent verification.

California has enacted the most comprehensive healthcare AI regulatory stack in the United States. SB 1120 prohibits AI-only coverage denials. AB 3030 requires GenAI disclosure. AB 489 bans AI from impersonating clinicians. AB 2575 eliminates the "doctor should have caught it" defense for AI vendors. AI Asset Assurance provides independent evaluation of clinical AI systems for demographic bias, training data representativeness, and disparate impact — before a patient is harmed and before a lawsuit is filed.

Request evaluation
REGULATION STATUS
SB 1120: Effective Jan 1, 2025
AB 3030: Effective Jan 1, 2025
AB 489: Effective Jan 1, 2026
AB 2575: Advancing 2025–2026
AI Asset Assurance coverage: Full bias evaluation
Liability trend: Rapidly expanding
$70M+: Goldman Sachs / Apple Card AI algorithm penalties
Criminal: SB 1120 willful violation penalties
No safe harbor: AB 2575 eliminates the "doctor override" defense
What the regulations require

Four laws, mapped to AI Asset Assurance capabilities

California's healthcare AI regulations form a layered compliance stack. Each law addresses a different dimension of AI risk. AI Asset Assurance addresses all four.

SB 1120 — Physicians Make Decisions Act

Effective January 1, 2025

Prohibits AI-only coverage denials. Requires licensed physician review. AI must use individual patient data, not just population statistics. AI algorithms must be open to audit. Periodic review of AI performance required. Criminal penalties for willful violations.

AI Asset Assurance: Evaluates whether utilization review AI produces different denial rates by demographic group. Shapley attribution identifies which patient characteristics drive denials. Prescriptive remediation specifies which variables to adjust and the projected improvement in equitable outcomes. Audit-ready certification provides the documentation SB 1120 requires.
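The disparate-denial-rate check at the core of this evaluation can be illustrated with a minimal sketch. The group names, outcome data, and function names below are hypothetical; the production pipeline also layers Shapley attribution on top, which is out of scope here.

```python
from collections import defaultdict

def denial_rates_by_group(records):
    """Compute the AI denial rate for each demographic group.

    `records` is a list of (group, denied) pairs, where `denied`
    is True when the AI recommended denying coverage.
    """
    totals, denials = defaultdict(int), defaultdict(int)
    for group, denied in records:
        totals[group] += 1
        if denied:
            denials[group] += 1
    return {g: denials[g] / totals[g] for g in totals}

def disparity_ratio(rates, group, reference):
    """Ratio of a group's denial rate to a reference group's rate."""
    return rates[group] / rates[reference]

# Hypothetical utilization-review outcomes for two cohorts with
# identical clinical profiles.
records = (
    [("black_women_60plus", True)] * 23 + [("black_women_60plus", False)] * 77
    + [("white_women_60plus", True)] * 10 + [("white_women_60plus", False)] * 90
)
rates = denial_rates_by_group(records)
ratio = disparity_ratio(rates, "black_women_60plus", "white_women_60plus")
print(round(ratio, 2))  # 2.3 — the disparity level described in the scenarios below
```

A real evaluation would condition on clinical covariates before comparing rates; the raw ratio is only the first screen.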

AB 3030 — AI in Healthcare Services

Effective January 1, 2025

Requires health facilities and physician offices to disclose when GenAI generates patient communications. Disclosure must appear prominently at the beginning of written communications and throughout chat-based interactions. Provider-reviewed communications are exempt but must be documented.

AI Asset Assurance: Evaluates GenAI patient communication systems for demographic bias in tone, readability, and clinical accuracy across patient populations. Identifies whether AI-generated communications provide different quality of information to different demographic groups — a disparity that disclosure alone doesn't address.

AB 489 — AI Impersonation Prohibition

Effective January 1, 2026

Prohibits AI from using terms, titles, or design elements implying it is a licensed healthcare professional. Bans marketing language like "doctor-level" or "clinician-guided" unless genuinely supported by licensed professionals. Violations subject to Medical Board jurisdiction.

AI Asset Assurance: While AB 489 focuses on representation rather than bias, AI Asset Assurance evaluation demonstrates that the AI system's clinical recommendations have been independently verified for accuracy and demographic fairness — strengthening the case that the system operates under genuine clinical oversight.

AB 2575 — Healthcare AI Disclosure & Liability

Advancing through the 2025–2026 legislative session

Requires disclosure of AI developer, funding source, foundation model, intended population, known risks, and training data demographics including known biases. Eliminates the defense that a physician's failure to override AI output severs the vendor's liability. Workers must be told they can override AI output.

AI Asset Assurance: Directly addresses AB 2575's training data disclosure requirement. Evaluates demographic representativeness of AI training data against the patient population being served. Identifies known biases by protected characteristic. The certification report provides exactly the documentation AB 2575 requires — developer, methodology, demographic analysis, known limitations — in a format designed for regulatory and litigation use.
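Training data representativeness reduces to a share-ratio check: for each demographic group, compare its share of the training data to its share of the population the AI will serve. The sketch below uses hypothetical shares and an assumed flagging threshold of 0.5 (a group represented at less than half its population share gets flagged); the function name is illustrative, not a product API.

```python
def representativeness_gaps(training_share, population_share, threshold=0.5):
    """Flag groups whose share of the training data falls below
    `threshold` times their share of the served patient population."""
    flags = {}
    for group, pop in population_share.items():
        train = training_share.get(group, 0.0)
        ratio = train / pop if pop else float("inf")
        if ratio < threshold:
            flags[group] = round(ratio, 2)
    return flags

# Hypothetical shares: the training data skews heavily toward one
# group relative to the health system's actual patient mix.
training = {"white_male": 0.78, "white_female": 0.12,
            "latina_female": 0.04, "black_female": 0.06}
population = {"white_male": 0.30, "white_female": 0.30,
              "latina_female": 0.20, "black_female": 0.20}
gaps = representativeness_gaps(training, population)
print(gaps)  # every group except white_male is underrepresented
```

The flagged ratios map directly onto AB 2575's known-bias disclosure: each entry names a population for which the vendor must disclose degraded coverage.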
Real-world exposure

What goes wrong without independent AI evaluation

These scenarios illustrate how California's healthcare AI regulations create liability for health systems, AI vendors, and insurers.

The utilization review denial

A health plan's AI denies a prior authorization for a cancer treatment. The patient is a 62-year-old Black woman. SB 1120 requires the denial to be based on her individual clinical circumstances, not population statistics. Post-hoc analysis reveals the AI's denial rate for Black women over 60 is 2.3 times the rate for white women with identical clinical profiles. The health plan had no independent evaluation of the AI's demographic performance.

With AI Asset Assurance: Pre-deployment evaluation identifies the demographic disparity before a single patient is affected. Shapley attribution shows age-race interaction is the primary driver. Remediation prescription specifies recalibration approach with projected 87% reduction in disparity.

The diagnostic AI miss

A clinical decision support AI recommends against further workup for a 45-year-old Latina patient presenting with chest pain. The physician follows the AI recommendation. The patient later suffers a cardiac event. Under AB 2575, the AI vendor cannot assert that the physician's failure to override the AI is a superseding cause. Discovery reveals the AI's training data was 78% white male. No independent evaluation of demographic performance was conducted.

With AI Asset Assurance: Training data representativeness evaluation flags the 78% white male skew before deployment. Per-demographic accuracy analysis identifies populations where the AI's recommendations are unreliable. The health system can require the vendor to retrain or restrict the AI's use for underrepresented populations.
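Per-demographic accuracy analysis amounts to stratifying a validation set by group and flagging groups that fall below a deployment floor. This sketch uses hypothetical labels and an assumed 85% accuracy floor; the function names are illustrative.

```python
def per_group_accuracy(examples):
    """Accuracy of AI recommendations broken out by demographic group.

    `examples` is a list of (group, prediction, ground_truth) tuples.
    """
    correct, total = {}, {}
    for group, pred, truth in examples:
        total[group] = total.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == truth)
    return {g: correct[g] / total[g] for g in total}

def unreliable_groups(accuracy, floor=0.85):
    """Groups whose accuracy falls below the deployment floor."""
    return sorted(g for g, acc in accuracy.items() if acc < floor)

# Hypothetical validation set: the model is accurate for the
# majority group in the training data but degrades sharply for an
# underrepresented one.
examples = (
    [("white_male", 1, 1)] * 92 + [("white_male", 1, 0)] * 8
    + [("latina_female", 0, 0)] * 7 + [("latina_female", 0, 1)] * 3
)
acc = per_group_accuracy(examples)
print(unreliable_groups(acc))  # ['latina_female']
```

Groups returned by the flagging step are the candidates for the retrain-or-restrict decision described above.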

The malpractice premium increase

A medical group deploys five AI clinical tools without independent evaluation. Their malpractice carrier, reviewing the group's risk profile at renewal, identifies unassessed AI exposure. Premiums increase 15-25% across the group. Three competing medical groups in the same market have AI Asset Assurance certifications and receive preferred rates.

With AI Asset Assurance: Annual evaluation of all clinical AI tools provides the malpractice carrier with documented risk assessment. Certification demonstrates the group has taken reasonable steps to identify and mitigate AI bias — the same standard the carrier applies to other clinical risk management practices.

The class action

A patient advocacy group identifies that a major health system's AI triage tool consistently assigns lower acuity scores to patients who list Spanish as their primary language. The AI is not making an explicit language-based decision — the disparity flows through proxy variables correlated with language preference. A class action is filed under California's Unruh Civil Rights Act. The health system's defense: "we didn't know." The plaintiff's response: "you didn't look."

With AI Asset Assurance: Proxy variable analysis identifies language-correlated disparities before deployment. Causal attribution traces the pathway from proxy variables to outcome disparity. The health system has documented evidence of diligent evaluation — transforming "we didn't know" into "we evaluated, identified, and remediated."
Who needs this

Three buyer types, one evaluation pipeline

California's healthcare AI regulations create compliance obligations for health systems, AI vendors, and the insurers who cover both.

Health systems & medical groups

Kaiser, Sutter, Cedars-Sinai, UCLA Health, Dignity Health, IPAs, and community health centers deploying clinical AI. You're deploying AI tools from vendors who may not have evaluated their systems for your patient population. AB 2575 requires you to disclose the AI's training data demographics to clinicians using it. SB 1120 requires your utilization review AI to be auditable. AI Asset Assurance evaluates each AI tool against your specific patient demographics — not the vendor's general claims.

Clinical AI vendors

Epic, Optum, Tempus, PathAI, Viz.ai, and clinical decision support startups selling to California health systems. Your buyers are about to ask for independent bias evaluation as a procurement condition. Under AB 2575, your liability isn't severed when a physician follows your AI's recommendation. Having an AI Asset Assurance certification before you go to market transforms a liability conversation into a trust conversation — and it differentiates you from vendors who haven't been independently evaluated.

Medical malpractice insurers

The Doctors Company, Cooperative of American Physicians, NORCAL Group, Medical Protective, and other carriers covering California physicians. Your insureds are deploying AI tools that create new liability exposure you can't currently quantify. AI Asset Assurance evaluation of your insureds' AI tools gives you the risk assessment data to price AI exposure accurately — and creates an incentive structure where physicians who evaluate their AI tools receive preferred rates.

Being honest about the competition

How AI Asset Assurance compares for healthcare AI evaluation

Healthcare AI evaluation is an emerging field. The honest truth: most existing governance platforms weren't designed for clinical AI's unique challenges — demographic representativeness, proxy variable analysis, and malpractice-grade documentation.

Capability | Credo AI | Holistic AI | Internal review | AI Asset Assurance
Training data demographic analysis | Limited | Manual review | Ad hoc | Automated per-demographic
Proxy variable detection | Aggregate scores | Statistical flags | Rarely done | Causal pathway tracing
Per-variable remediation prescription | No | No | No | Yes: per-variable fix + projected improvement
Structural independence (multi-validator) | Single platform | Single platform | No independence | 5 independent validators (BFT)
Malpractice-grade documentation | Compliance reports | Audit reports | Internal memos | Litigation-ready certification
Patient population matching | No | No | Sometimes | Evaluates AI against your patient demographics

Credo AI ($39M raised) and Holistic AI are strong governance platforms with healthcare capabilities. AI Asset Assurance is purpose-built for independent verification — it complements governance platforms rather than replacing them. A health system can use Credo AI for internal governance workflow and AI Asset Assurance for independent external verification.

Healthcare pricing

What it costs — and what it prevents

Healthcare engagements are priced at the Enterprise tier due to the clinical complexity and malpractice-grade documentation requirements.

Single AI tool evaluation
$60,000
Per clinical AI system

Full five-layer evaluation of one clinical AI tool. Training data demographic analysis. Per-demographic accuracy evaluation. Proxy variable detection. Prescriptive remediation. Litigation-ready certification report.

Multi-tool health system
$60K × n tools
Volume pricing available

A health system deploying 5-10 clinical AI tools needs each one evaluated against its patient population. Annual re-evaluation as tools update. Enterprise agreements with volume pricing for health systems with 5+ tools.

Founding partner
$40,000
First 10 healthcare orgs

33% discount for the first ten healthcare organizations. Priority access. Input into healthcare-specific evaluation methodology. Case study participation (optional). Name recognition as an AI Asset Assurance founding healthcare partner.

Regulatory timeline

California's healthcare AI regulatory stack

California is building the most comprehensive healthcare AI regulatory framework in the United States. Each new law adds requirements — and each requirement creates evaluation obligations.

Jan 2025
AB 3030 — GenAI Disclosure
Patient notification when GenAI generates clinical communications. Documentation of provider review.
Jan 2025
SB 1120 — Physicians Make Decisions Act
No AI-only denials. Individual patient data required. Algorithm open to audit. Criminal penalties for willful violations.
Jan 2025
AG Legal Advisory
California Attorney General formally advises that existing CA laws (consumer protection, anti-discrimination, CPOM doctrine) apply to AI in healthcare.
Jan 2026
AB 489 — AI Impersonation Ban
AI cannot present itself as a licensed clinician. Marketing restrictions on "doctor-level" and "clinician-guided" claims. Medical Board jurisdiction.
2026
AB 2575 — Healthcare AI Disclosure & Liability (advancing)
Training data demographic disclosure. Known bias disclosure. Override rights for clinicians. Vendor cannot blame physician for not overriding AI. The most significant healthcare AI liability provision in U.S. law.

Four laws. One evaluation. Complete documentation.

California's healthcare AI regulations are the most comprehensive in the nation. Independent evaluation with causal attribution and malpractice-grade documentation is the standard of care for AI deployment. Don't wait for the claim.

Request evaluation