Local Law 144 requires annual independent bias audits for any Automated Employment Decision Tool used for NYC hiring or promotion. Most auditors report selection rates and stop. AI Asset Assurance delivers Shapley decomposition that identifies why disparities exist — not just that they exist.
Request evaluation

Every requirement maps directly to an AI Asset Assurance capability. One evaluation pipeline covers everything you need.
An independent auditor must conduct a bias audit of the AEDT no more than one year prior to the use of the tool. The audit must assess disparate impact on the basis of race, ethnicity, and sex.
The audit must calculate selection rates and impact ratios for each demographic category, comparing to the most selected category. Results must be published on the employer's website.
The employer or employment agency must make publicly available a summary of the results of the most recent bias audit and the distribution date of the tool.
Candidates must be notified at least 10 business days before use of the AEDT, including which job qualifications the tool will assess and the data sources used.
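To make the selection-rate and impact-ratio requirement concrete, here is a minimal sketch of the calculation, assuming hypothetical category counts; a real audit runs on the employer's historical AEDT decisions and includes intersectional categories.

```python
# Hypothetical counts per demographic category: (applicants scored, selected).
# A real audit uses the employer's historical AEDT decisions for each
# race/ethnicity and sex category, including intersectional combinations.
counts = {
    "Category A": (1200, 540),
    "Category B": (800, 296),
    "Category C": (400, 132),
}

selection_rates = {cat: selected / total for cat, (total, selected) in counts.items()}
top_rate = max(selection_rates.values())  # rate of the most-selected category

# LL144 impact ratio: each category's selection rate divided by the
# selection rate of the most-selected category.
impact_ratios = {cat: rate / top_rate for cat, rate in selection_rates.items()}

for cat in counts:
    print(f"{cat}: selection rate {selection_rates[cat]:.2f}, "
          f"impact ratio {impact_ratios[cat]:.2f}")
```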
These are the organizations most likely to need compliance evaluation under this regulation.
Any employer using an AEDT to screen candidates or make hiring decisions for positions located in New York City, including remote roles whose designated work location is NYC.
Staffing firms, recruiting agencies, and RPO providers using AI-powered screening tools for NYC-based positions. The agency bears audit responsibility.
ATS platforms, resume screeners, video interview analyzers, and assessment tools. While the employer bears legal responsibility, vendors face customer pressure to provide audit-ready systems.
AI Asset Assurance doesn't just find problems — it identifies root causes with Shapley attribution and prescribes exactly how to fix them.
Submit your AEDT's historical decision data or provide API access. AI Asset Assurance evaluates selection rates across all protected categories with five independent validators — exceeding the minimum independence requirement.
If disparate impact is found, Shapley attribution identifies the specific variables driving the disparity. Instead of "Group X has a 0.72 selection rate," you get "Variable A contributes 43% of the gap; Variable B contributes 31%." Now you know exactly what to fix.
Receive a publication-ready audit summary with all LL144-required fields, a candidate notice template, and a digitally signed certification report. Post the summary to your website, send the candidate notices, and you're compliant.
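As a sketch of the attribution step above, the following decomposes a score gap between two group-average candidates into exact per-variable Shapley contributions, assuming a toy linear scoring model; the feature names, weights, and values are hypothetical, and the production evaluation runs against the deployed AEDT and the employer's actual data.

```python
from itertools import combinations
from math import factorial

# Toy linear scoring model standing in for the AEDT (hypothetical weights).
WEIGHTS = {"industry_experience": 4.0, "skills_score": 2.5, "referral_flag": 6.0}

def score(candidate):
    return sum(WEIGHTS[f] * candidate[f] for f in WEIGHTS)

# Hypothetical group-average feature values (illustrative, not real data).
group_hi = {"industry_experience": 9.0, "skills_score": 7.5, "referral_flag": 0.6}
group_lo = {"industry_experience": 6.0, "skills_score": 7.0, "referral_flag": 0.2}

features = list(WEIGHTS)
n = len(features)

def coalition_value(subset):
    """Score when features in `subset` take group_hi values and the rest group_lo."""
    point = {f: (group_hi[f] if f in subset else group_lo[f]) for f in features}
    return score(point)

# Exact Shapley values: each feature's share of the score gap between the
# two group-average candidates. The per-feature shares sum to the full gap.
shapley = {}
for f in features:
    others = [g for g in features if g != f]
    value = 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            with_f = coalition_value(set(subset) | {f})
            without_f = coalition_value(set(subset))
            value += weight * (with_f - without_f)
    shapley[f] = value

total_gap = score(group_hi) - score(group_lo)
for f, contribution in sorted(shapley.items(), key=lambda kv: -kv[1]):
    print(f"{f}: {contribution:+.2f} points ({contribution / total_gap:.0%} of the gap)")
```

In this toy example the per-variable shares sum exactly to the full score gap, which is the property that lets a statement like "Variable A contributes 43% of the gap" be made precisely.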
Holistic AI, DCI Consulting, and BABL AI already perform the majority of LL144 bias audits. We're honest about what they do — and precise about what they don't.
| Capability | Holistic AI | DCI Consulting | BABL AI | Big 4 / Deloitte | AI Asset Assurance |
|---|---|---|---|---|---|
| Impact ratio calculation | ✓ | ✓ | ✓ | ✓ | ✓ |
| Intersectional analysis | ✓ | ✓ | ✓ | ✓ | ✓ |
| Publication-ready summary | ✓ | ✓ | ✓ | ✓ | ✓ |
| Market track record | ✓ ~20% of audits | ✓ ~20% of audits | ✓ ~16% of audits | ✓ Advisory | New entrant |
| Causal decomposition (WHY bias exists) | — | — | — | — | ✓ Shapley per-variable |
| Structural independence | Single platform | Single auditor | Single auditor | Single firm | ✓ 5 validators, BFT consensus |
| Prescriptive remediation | — | — | General advice | Narrative recs | ✓ Per-variable fix + projected improvement |
| Beyond-LL144 protected categories | Race + gender only | Race + gender only | Race + gender only | ✓ Broader | ✓ All EEOC + NYC HRL categories |
| Deployment integrity verification | — | — | — | — | ✓ Sealed deployment |
Holistic AI, DCI Consulting, and BABL AI collectively perform over half of all LL144 bias audits. They're established, competent, and deliver what the regulation requires: impact ratios, intersectional analysis, and publication-ready summaries. If your only goal is checking the LL144 compliance box at the lowest cost, they're proven options.
Where AI Asset Assurance is different, stated precisely: Every auditor above tells you IF bias exists by calculating selection rates. None of them tell you WHY it exists: which specific variable in your AI system causes the disparity, through what mechanism (direct effect vs. proxy correlation), and what would happen if you changed it. An LL144 audit that says "your impact ratio for Hispanic women is 0.74" is compliant. But it doesn't help you fix the problem. AI Asset Assurance says "the disparity is 42% driven by the 'years of industry experience' variable acting as an age and gender proxy; removing it improves the ratio to 0.91 with a 3% reduction in predictive validity, recoverable by substituting skills-based assessment scores."
Why this matters beyond the DCWP audit: LL144 compliance protects you from a $1,500/violation fine. It does NOT protect you from a Title VII or NYC Human Rights Law class action — and those lawsuits don't care about selection rates. They care about causation. When a plaintiff's attorney asks "what caused the disparity and what did you do about it?", a compliant LL144 audit has no answer. AI Asset Assurance does.
Additionally, LL144 only requires testing for race and gender. The EEOC enforces across disability, age, national origin, and religion. NYC Human Rights Law covers even more categories. An LL144 audit that passes on race and gender can still leave you exposed to an EEOC complaint on disability, exactly as happened with HireVue. AI Asset Assurance evaluates across all protected categories, not just the LL144 minimum.
Each scenario reflects documented patterns from EEOC enforcement, class action litigation, and DCWP investigations — applied to Automated Employment Decision Tools covered by Local Law 144.
LL144 penalty structure: $500 for the first violation. $500–$1,500 for each subsequent violation. Each use of an AEDT without a bias audit or without required candidate notice is a separate violation. A company screening 10,000 applicants per year without a compliant audit faces $5M–$15M in potential penalties — plus the audit results must be publicly posted, creating reputational exposure regardless of the findings.
A major NYC employer passes its annual LL144 bias audit: the selection rates by race and gender fall within the four-fifths rule. The audit is published. But the compliant aggregate numbers mask a problem: beneath the passing selection rates, the AI scores Black candidates an average of 12 points lower than white candidates. The selection rate is compliant only because the cutoff score is low enough. When the employer raises the cutoff six months later to reduce applicant volume, the disparity becomes outcome-determinative. The audit said the system was fair. The system had embedded bias the audit couldn't see.
The employer believes the system is fair because the audit passed. When the cutoff changes, the hidden bias becomes visible in outcomes. A rejected applicant sues under Title VII and NYC Human Rights Law. The employer's defense — "we had a compliant bias audit" — is undermined because the audit only measured selection rates, not the underlying score distribution. The published audit report is exhibit A in the plaintiff's case, showing the employer knew the tool was in use but didn't understand how it worked.
AI Asset Assurance doesn't just calculate selection rates — Shapley attribution decomposes the score distribution and identifies which variables drive the 12-point scoring gap between demographic groups. The employer knows the bias exists even when the aggregate selection rates are compliant. They can fix the underlying model, not just the threshold. The evaluation is robust to cutoff changes because it addresses root cause, not symptom.
The four-fifths rule is a screening tool, not a guarantee of fairness. Selection rate compliance can mask significant underlying bias in scoring distributions.
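A small illustration of this scenario's cutoff effect, assuming hypothetical normal score distributions with a 12-point mean gap: the same scoring gap passes the four-fifths test at a low cutoff and fails it once the cutoff rises.

```python
from statistics import NormalDist

# Hypothetical score distributions: same spread, but one group's mean
# is 12 points lower (illustrative numbers only).
group_a = NormalDist(mu=70, sigma=20)
group_b = NormalDist(mu=58, sigma=20)

def impact_ratio(cutoff):
    rate_a = 1 - group_a.cdf(cutoff)   # selection rate, most-selected group
    rate_b = 1 - group_b.cdf(cutoff)   # selection rate, lower-scored group
    return rate_b / rate_a

for cutoff in (45, 70):
    ratio = impact_ratio(cutoff)
    verdict = "passes" if ratio >= 0.8 else "fails"
    print(f"cutoff={cutoff}: impact ratio {ratio:.2f} -> {verdict} the four-fifths test")
```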
A NYC company uses AI-scored video interviews as a screening step for 8,000 annual applicants. The system analyzes speech patterns, facial expressions, and response structure. It penalizes atypical speech cadence (affecting non-native speakers and people with speech disabilities), limited facial expressiveness (affecting neurodiverse candidates), and non-standard response structures (affecting candidates from non-Western cultural backgrounds). The bias audit calculates selection rates by race and gender — but LL144 doesn't require testing for disability, national origin, or neurodiversity.
The bias audit passes on race and gender — LL144's required categories. But the EEOC files a complaint on disability discrimination grounds, just as it did against iTutorGroup ($365K settlement for age discrimination by AI). NYC Human Rights Law protects categories beyond LL144's audit requirements. The LL144 audit provided a false sense of security. 8,000 applicants × $1,500/violation = $12M in LL144 penalties alone, plus EEOC exposure and NYC HRL damages.
AI Asset Assurance evaluates beyond LL144's minimum requirements. Shapley attribution identifies "speech cadence variance," "facial expression range," and "response structure deviation" as the variables driving disparate outcomes across disability, national origin, and neurodiversity dimensions. The employer has evidence of comprehensive evaluation — not just LL144 compliance but genuine diligence across all protected categories under federal and NYC law.
Comparable: EEOC v. iTutorGroup ($365K settlement for AI age discrimination). ACLU complaint against HireVue/CVS for facial analysis "employability scores" affecting people with disabilities.
A NYC financial services firm uses an AI tool from a vendor to rank employees for promotion consideration. The vendor claims the tool has been "independently audited" but the audit was conducted by the vendor's preferred auditor on the vendor's training data — not on the employer's actual employee population. LL144 requires the audit to be conducted on the employer's data or the data the tool was trained on. The vendor's generic audit doesn't reflect the firm's specific workforce demographics, job categories, or performance distribution.
DCWP investigates and finds the bias audit doesn't meet LL144 requirements — it wasn't conducted on relevant data for the employer's specific use. The employer relied on the vendor's representation. Under LL144, the employer — not the vendor — is responsible for having a compliant audit. Every promotion decision made using the tool without a valid audit is a separate violation. The vendor's "audit" is worthless for compliance.
AI Asset Assurance evaluates the tool on the employer's actual workforce data — not the vendor's generic training set. The evaluation produces LL144-compliant impact ratios plus Shapley attribution showing which variables drive any disparities within this specific employee population. The audit is independently conducted, covers the right data, and exceeds LL144's minimum requirements with causal decomposition.
LL144 places the compliance obligation on the employer, not the vendor. A vendor-provided audit on vendor data does not satisfy the employer's obligation.
A compliant bias audit costs $8K–$20K. The penalty for not having one is $500–$1,500 per applicant. AI Asset Assurance delivers LL144 compliance plus Shapley causal attribution that standard auditors don't provide — so you don't just know IF there's bias, you know WHY and HOW TO FIX IT.
Every current auditor calculates selection rates. AI Asset Assurance decomposes the cause with Shapley attribution. Request an evaluation and see the difference.
Request evaluation