Colorado SB 24-205, the Colorado AI Act, requires deployers of high-risk AI systems to use reasonable care to protect consumers from algorithmic discrimination. Annual impact assessments are mandatory. AI Asset Assurance delivers the impact assessment with causal attribution that demonstrates reasonable care.
Request evaluation

Every requirement maps directly to an AI Asset Assurance capability. One evaluation pipeline covers everything you need.
Deployers must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. This is a duty-of-care standard, not a prescriptive checklist.
Deployers must complete an impact assessment of each high-risk AI system annually, and within 90 days after any intentional and substantial modification. The assessment must be made available to the AG upon request.
Deployers must notify consumers that the high-risk AI system is in use, provide a description of its purpose, and provide contact information for the deployer.
Consumers must be provided the opportunity to appeal an adverse decision made by the AI system and to request human review.
These are the organizations most likely to need compliance evaluation under this regulation.
Any AI system used for employment decisions affecting Colorado residents — hiring, promotion, termination, compensation. Includes remote positions available to Colorado applicants.
AI used for lending decisions, credit scoring, and financial product eligibility for Colorado consumers. Banks, credit unions, and fintech platforms are in scope.
Algorithmic underwriting and pricing decisions for Colorado policyholders. Health, life, auto, and property insurance using AI for risk assessment or claims processing.
AI systems that influence healthcare access, treatment recommendations, or resource allocation for Colorado patients.
AI used for tenant screening, rental pricing, or mortgage decisions affecting Colorado residents. Algorithmic rent-setting tools are increasingly scrutinized.
AI used for admissions, financial aid, or academic assessment decisions at Colorado educational institutions.
AI Asset Assurance doesn't just find problems — it identifies root causes with Shapley attribution and prescribes exactly how to fix them.
AI Asset Assurance evaluates your AI system across all protected categories defined by Colorado law — race, color, ethnicity, sex, religion, age, disability, and more. Five independent validators assess bias, accuracy, and fairness simultaneously.
Shapley attribution decomposes any algorithmic discrimination to its root variables. The impact assessment doesn't just say "disparate impact exists" — it explains why, which variables drive it, and how much each contributes. This is what "reasonable care" looks like.
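For readers who want the mechanics, here is a minimal sketch of how a SHAP-based decomposition of a group disparity can work. It assumes a fitted tree-based scoring model and the open-source shap library; the names model, X, and protected are hypothetical placeholders, and a production pipeline is more involved than this.

```python
import pandas as pd
import shap

# Assumes: `model` is a fitted tree-based regressor (or margin-output
# classifier, so shap_values returns a single array), `X` is a feature
# DataFrame, and `protected` is a boolean Series marking the
# disadvantaged group. All names are hypothetical.
explainer = shap.TreeExplainer(model)
shap_vals = pd.DataFrame(explainer.shap_values(X),
                         columns=X.columns, index=X.index)

# Per-row SHAP values sum to (model score - baseline), so the difference
# in group means decomposes the score gap between groups feature by feature.
contribution = shap_vals[protected].mean() - shap_vals[~protected].mean()
share = contribution / contribution.sum()

print(share.sort_values(ascending=False).head())
# The top entries are the variables driving the disparity, e.g. a single
# proxy feature accounting for ~0.4 of the gap.
```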
Receive a Colorado AI Act-formatted impact assessment, consumer disclosure template, and remediation prescriptions. If the AG inquires, you have decomposed evidence of reasonable care — not just a checkbox audit.
Governance platforms help you build internal compliance programs — but they sell software to the companies they evaluate. Consulting auditors provide independence — but take weeks per engagement. AI Asset Assurance combines both: genuine structural independence delivered through a streamlined platform, with patent-protected evaluation methods no other provider offers.
| Capability | Internal assessment | Law firm review | Warden AI | AI Asset Assurance |
|---|---|---|---|---|
| Annual impact assessment | ✓ Self-reported | ✓ Legal review | ✓ Automated | ✓ Independent, decomposed |
| Causal attribution | — | — | — | ✓ Shapley decomposition |
| "Reasonable care" evidence | Weak | Moderate | Moderate | ✓ Independent, decomposed |
| Modification detection | Manual process | — | — | ✓ Automatic via sealed deployment |
| Remediation prescription | — | Legal recommendations | General guidance | ✓ Variable-level fix instructions |
| Consumer disclosure template | — | ✓ | ✓ | ✓ Pre-populated |
Each scenario reflects documented patterns from active litigation and regulatory enforcement nationally — applied to consequential AI decisions covered by the Colorado AI Act.
Why Colorado's framework is distinct: The "reasonable care" standard is a duty-of-care obligation — not a checklist. Evidence of what you did (or didn't do) to evaluate your AI determines your liability. The AG has exclusive enforcement with penalties up to $20,000 per violation. You must report algorithmic discrimination to the AG within 90 days of discovery. But there's an affirmative defense: following NIST AI RMF or ISO 42001 and promptly curing violations. An AI Asset Assurance evaluation is documented evidence of reasonable care that activates that defense.
A Colorado tech company uses AI to screen 15,000 applicants annually. The model weighs "technology fluency" signals — years since last certification, social media activity patterns, and familiarity with recent frameworks. These variables correlate strongly with age. Applicants over 40 are rejected at 2.1x the rate of younger applicants with equivalent qualifications. Under the Colorado AI Act, this is a "consequential decision" in employment made by a "high-risk AI system" — squarely in scope.
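For concreteness, the 2.1x figure is a simple rate ratio. Here is the arithmetic with hypothetical counts (not client data) consistent with the scenario's 15,000 annual applicants:

```python
# Hypothetical counts consistent with the scenario above.
rejected_40_plus, total_40_plus = 6300, 10000    # 63% rejection rate
rejected_under_40, total_under_40 = 1500, 5000   # 30% rejection rate

ratio = (rejected_40_plus / total_40_plus) / (rejected_under_40 / total_under_40)
print(ratio)  # 2.1: applicants over 40 rejected at 2.1x the younger rate
```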
No annual impact assessment was completed. The company cannot demonstrate reasonable care. Once the disparity surfaces — through applicant complaints or the AG's own analysis — the company must disclose to the AG within 90 days. Without documented evaluation, the affirmative defense is unavailable. At $20,000 per violation across thousands of affected applicants, penalties compound rapidly.
Shapley attribution identifies "years since last certification" as contributing 38% of the age-related disparity. The company replaces it with a skills-based assessment that measures actual capability rather than recency. The annual impact assessment is documented, activating the affirmative defense under NIST AI RMF compliance.
Comparable: Mobley v. Workday — class action certified May 2025 alleging age discrimination by AI hiring platform across hundreds of thousands of applicants.
A property management company uses AI to screen tenants across 2,000 units in Denver and Colorado Springs. The system scores applicants on income stability, credit patterns, and "rental risk" signals. It doesn't reject voucher holders outright, but the scoring model penalizes income patterns typical of voucher recipients — irregular payment amounts, government-source income indicators, and rental history gaps. The result: applicants using housing choice vouchers are approved at 35% the rate of market-rate applicants with equivalent credit scores. In Colorado, source of income is a protected class.
The company says the model doesn't consider voucher status. But the proxy variables produce the same discriminatory outcome. The AG investigates after a pattern of fair housing complaints. No impact assessment exists. The disparate impact on a protected class (source of income) triggers enforcement. The property company faces penalties for every affected applicant across all of its properties.
Cohort analysis reveals the 35% vs. market-rate approval disparity. Shapley attribution identifies "income regularity" and "payment source type" as the variables driving 52% of the gap — proxies for voucher status. The company adjusts the model to evaluate creditworthiness independently of income source characteristics. Documented impact assessment serves as evidence of reasonable care.
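A minimal sketch of what that cohort analysis can look like in code, assuming a decision log with hypothetical column names (credit_band, voucher, approved), not the product's actual schema:

```python
import pandas as pd

# Hypothetical export of screening outcomes; columns are illustrative.
df = pd.read_csv("tenant_screening_outcomes.csv")

# Approval rate by credit band and voucher status.
rates = (df.groupby(["credit_band", "voucher"])["approved"]
           .mean()
           .unstack("voucher"))
rates["ratio"] = rates[True] / rates[False]
print(rates)
# A ratio near 0.35 within the same credit band reproduces the disparity
# described above and is the input to the Shapley decomposition step.
```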
Comparable: SafeRent settled for $2M+ over AI tenant screening with disparate racial impact. PERQ settled after AI leasing agent issued blanket rejections to voucher holders.
A health insurer uses AI to process prior authorization requests for post-acute care. The model predicts "expected recovery timeline" based on historical claims data. But the training data overrepresents urban patients with access to physical therapy, specialist follow-ups, and rehabilitation facilities. Rural Colorado patients — who face longer travel times, fewer specialists, and limited rehab options — show "slower recovery" patterns in the data. The AI learns to deny extended care authorization for rural patients at higher rates, despite their clinical need being equivalent or greater.
Rural patients in Western Slope and Eastern Plains communities are denied care at disproportionate rates. Geographic discrimination correlates with race and ethnicity (rural Colorado's Hispanic population is disproportionately affected). The insurer must report to the AG within 90 days of discovering the disparity. Without a documented impact assessment showing reasonable care, the affirmative defense fails.
Delta analysis between the AI's authorization recommendations and clinical assessments reveals a systematic urban-rural gap. Shapley attribution identifies "proximity to rehab facilities" and "historical recovery rate by ZIP code" as the variables driving 44% of the disparity. The insurer adjusts the model to weight clinical need independently of geographic access patterns. The evaluation report documents the finding and fix.
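A minimal sketch of that delta analysis, assuming a decision log with hypothetical column names (ai_approved, clinician_approved, region):

```python
import pandas as pd

# Hypothetical export of prior-authorization decisions; columns are
# illustrative only.
df = pd.read_csv("prior_auth_decisions.csv")

# "Delta" cases: the clinician supports extended care but the AI denies it.
df["under_authorized"] = df["clinician_approved"] & ~df["ai_approved"]

print(df.groupby("region")["under_authorized"].mean())
# A materially higher under-authorization rate in rural regions flags the
# systematic gap that Shapley attribution then decomposes by variable.
```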
Comparable: UnitedHealth/nH Predict — class action alleging AI algorithm overrode physicians and denied medically necessary care. 90% of appealed denials reversed. Discovery ordered March 2026.
A Colorado credit union uses AI for personal loan decisions. The model evaluates "income stability" by measuring month-to-month income variance. Colorado's ski resort and outdoor recreation economy employs thousands of seasonal workers in mountain communities — workers whose income is high in winter and low in summer, or vice versa. The AI treats income variance as a risk factor. Seasonal workers in Vail, Aspen, Breckenridge, and Telluride are denied at 1.8x the rate of year-round earners with identical annual income. This workforce is disproportionately Hispanic and under 30.
The credit union can show the model doesn't use race or age. But the "income stability" metric produces disparate impact across both protected categories — race and age — because of Colorado's specific economic structure. The AG investigates. No impact assessment exists to demonstrate reasonable care. No affirmative defense available. Penalties at $20,000 per affected borrower decision.
Shapley attribution identifies "monthly income variance" as contributing 41% of the denial disparity. The evaluation recommends using annualized income with seasonal adjustment rather than month-to-month stability. The adjusted model maintains predictive accuracy while reducing the seasonal worker penalty. The documented evaluation activates the NIST AI RMF affirmative defense.
Note: Colorado banks and credit unions subject to federal prudential regulators may qualify for exemption — but only if the federal guidance meets the Act's criteria. The exemption is not automatic.
A Colorado university uses AI to rank scholarship applicants. The model evaluates academic potential using high school GPA, test scores, extracurricular depth, and "academic engagement signals" — AP course load, research experience, academic competition participation. These signals correlate with school district resources, which correlate with family income, which correlates with race. First-generation college applicants from underfunded districts in Pueblo, Aurora, and rural communities are systematically ranked lower. Under the Act, education decisions are explicitly "consequential decisions" in scope.
The university receives AG inquiries after scholarship outcomes show racial and socioeconomic disparities. No impact assessment was completed. The "academic engagement" signals are proxies for district funding, which is a proxy for race and income. The university can't explain which variables drive the disparity or demonstrate any effort to detect it. The 60-day cure period is available, but without evaluation infrastructure, they can't identify the root cause within 60 days.
Shapley attribution identifies "AP course load" and "academic competition participation" as contributing 36% of the socioeconomic disparity — both reflect school district resources, not individual potential. The university adjusts the model to normalize for district opportunity levels. The annual impact assessment is documented and filed. If the AG inquires, the university has evidence of proactive reasonable care and qualifies for the affirmative defense.
The Colorado AI Act explicitly covers education decisions. Universities using AI for admissions, financial aid, or assessment are in scope regardless of size.
Colorado's framework is built on a single question: did you exercise reasonable care? An AI Asset Assurance evaluation is the documented answer. It costs $8K–$20K. It produces the annual impact assessment the Act requires. It activates the affirmative defense. And it identifies problems before the AG does — giving you the 60-day cure window with an actual plan to cure.
The reasonable care standard means evidence matters. An AI Asset Assurance impact assessment with causal attribution is documented evidence of diligence that regulators can review.
Request evaluation