DAYS UNTIL EU AI ACT ENFORCEMENT — August 2, 2026. Also covering: NYC Local Law 144 (live now) · Colorado AI Act (June 30, 2026) · SR 11-7 model validation · GDPR Article 22.

NYC Local Law 144 is live. Your AEDT bias audit should go deeper than selection rates.

Local Law 144 requires annual independent bias audits for any Automated Employment Decision Tool used for NYC hiring or promotion. Most auditors report selection rates and stop. AI Asset Assurance delivers Shapley decomposition that identifies why disparities exist — not just that they exist.

Request evaluation
REGULATION STATUS
Status: Enforced — live now
Scope: AEDTs for NYC employment
Audit frequency: Annual
AI Asset Assurance coverage: Full LL144 compliance
Current auditor market: BABL AI, ORCAA, Big 4
What the regulation requires

Requirements and how AI Asset Assurance addresses each one

Every requirement maps directly to an AI Asset Assurance capability. One evaluation pipeline covers everything you need.

Independent bias audit

§ 20-871(a)(1)

An independent auditor must conduct a bias audit of the AEDT no more than one year prior to the use of the tool. The audit must assess disparate impact on the basis of race, ethnicity, and sex.

AI Asset Assurance: Five structurally independent validators exceed the regulation's single-auditor minimum. Structural independence means each validator is legally separate, runs on separate infrastructure, and uses a different evaluation model. No current LL144 auditor offers comparable independence.

Impact ratio calculation

§ 20-870 and DCWP implementing rules

The audit must calculate selection rates and impact ratios for each demographic category, comparing to the most selected category. Results must be published on the employer's website.

AI Asset Assurance: Standard impact ratio calculations are included in every evaluation. But AI Asset Assurance goes further: Shapley attribution decomposes each impact ratio to show which specific variables drive the disparity. When a group's impact ratio is 0.72, AI Asset Assurance tells you that variable X contributes 43% of the gap and variable Y contributes 31%.
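The calculation behind this requirement is simple division: each category's selection rate, divided by the selection rate of the most selected category. A minimal sketch in Python (the category labels and counts here are illustrative, not drawn from any real audit):

```python
from collections import Counter

def impact_ratios(outcomes):
    """outcomes: iterable of (category, was_selected) pairs.
    Returns {category: (selection_rate, impact_ratio)}, where the
    impact ratio divides each rate by the highest category rate."""
    totals, selected = Counter(), Counter()
    for category, was_selected in outcomes:
        totals[category] += 1
        if was_selected:
            selected[category] += 1
    rates = {c: selected[c] / totals[c] for c in totals}
    best = max(rates.values())
    return {c: (rates[c], rates[c] / best) for c in rates}

# Illustrative numbers only: group A selected 60/100, group B 43/100.
sample = ([("A", True)] * 60 + [("A", False)] * 40
          + [("B", True)] * 43 + [("B", False)] * 57)
for cat, (rate, ratio) in sorted(impact_ratios(sample).items()):
    print(f"{cat}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")
```

Any category whose impact ratio falls below 0.80 draws scrutiny under the EEOC's four-fifths rule of thumb, which is why a ratio of 0.72 is a red flag even before any causal analysis.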

Published summary of results

§ 20-871(a)(2)

The employer or employment agency must make publicly available a summary of the results of the most recent bias audit and the distribution date of the tool.

AI Asset Assurance: The evaluation report includes a publication-ready summary with all required fields: source and explanation of data, number of individuals assessed, selection rates, impact ratios, and the date the AEDT was most recently used. Ready to post directly to your website.

Notice to candidates

§ 20-871(b)

Candidates must be notified at least 10 business days before use of the AEDT, including which job qualifications the tool will assess and the data sources used.

AI Asset Assurance: The evaluation report includes a candidate notice template with the required disclosures pre-populated based on your specific AEDT configuration. Modify as needed and deploy directly.

Penalties: up to $500 for a first violation, $500–$1,500 for each subsequent violation

Each day of non-compliance and each individual affected constitutes a separate violation. For a company screening 100 candidates per day, penalties compound to $50,000+ per day.

Who is affected

Is your organization in scope?

These are the organizations most likely to need compliance evaluation under this regulation.

Employers hiring in NYC

Any employer using an AEDT to screen candidates or make hiring decisions for positions located in New York City. Includes remote roles if the work location is NYC.

DIRECTLY IN SCOPE

Employment agencies

Staffing firms, recruiting agencies, and RPO providers using AI-powered screening tools for NYC-based positions. The agency bears audit responsibility.

DIRECTLY IN SCOPE

HR technology vendors

ATS platforms, resume screeners, video interview analyzers, and assessment tools. While the employer bears legal responsibility, vendors face customer pressure to provide audit-ready systems.

INDIRECT — CUSTOMER DEMAND
How AI Asset Assurance works

From evaluation to certification in three steps

AI Asset Assurance doesn't just find problems — it identifies root causes with Shapley attribution and prescribes exactly how to fix them.

STEP 01

Evaluate your AEDT

Submit your AEDT's historical decision data or provide API access. AI Asset Assurance evaluates selection rates across all protected categories with five independent validators — exceeding the minimum independence requirement.

STEP 02

Decompose any disparities

If disparate impact is found, Shapley attribution identifies the specific variables driving the disparity. Instead of "Group X has an impact ratio of 0.72," you get "Variable A contributes 43% of the gap; Variable B contributes 31%." Now you know exactly what to fix.
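The decomposition described in this step can be sketched with the textbook Shapley formula: a variable's attribution is its average marginal contribution to the gap over all coalitions of the other variables. Everything below — the value function, variable names, and coefficients — is a hypothetical stand-in for illustration, not AI Asset Assurance's actual method:

```python
from itertools import combinations
from math import factorial

def shapley(players, value):
    """Exact Shapley values for a value function defined on subsets of players."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        contrib = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = set(coalition)
                # Standard coalition weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                contrib += weight * (value(s | {p}) - value(s))
        phi[p] = contrib
    return phi

# Hypothetical linear screening model: the between-group score gap explained
# by a subset of variables is the sum of (model weight x group mean difference).
weights = {"experience": 0.8, "education": 0.5, "assessment": 0.3}
mean_diff = {"experience": 4.0, "education": 2.0, "assessment": -1.0}

def gap_explained(subset):
    return sum(weights[v] * mean_diff[v] for v in subset)

phi = shapley(list(weights), gap_explained)
total = gap_explained(set(weights))
for var, share in phi.items():
    print(f"{var}: {share:+.2f} points ({share / total:+.0%} of the gap)")
```

Because this toy value function is additive, each variable's Shapley value collapses to its own term; with interacting variables the same formula apportions interaction effects fairly, which is what makes it suitable for decomposing a disparity into per-variable contributions.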

STEP 03

Certify and publish

Receive a publication-ready audit summary with all LL144-required fields, a candidate notice template, and a digitally signed certification report. Post the summary to your website, deliver the candidate notices, and you're compliant.

AI Asset Assurance vs alternatives

The LL144 audit market is established. Here's where we fit.

Holistic AI, DCI Consulting, and BABL AI already perform the majority of LL144 bias audits. We're honest about what they do — and precise about what they don't.

Capability | Holistic AI | DCI Consulting | BABL AI | Big 4 / Deloitte | AI Asset Assurance
Impact ratio calculation | ✓ | ✓ | ✓ | ✓ | ✓
Intersectional analysis | ✓ | ✓ | ✓ | ✓ | ✓
Publication-ready summary | ✓ | ✓ | ✓ | ✓ | ✓
Market track record | ✓ ~20% of audits | ✓ ~20% of audits | ✓ ~16% of audits | ✓ Advisory | New entrant
Causal decomposition (WHY bias exists) | — | — | — | — | ✓ Shapley per-variable
Structural independence | Single platform | Single auditor | Single auditor | Single firm | ✓ 5 validators, BFT consensus
Prescriptive remediation | General advice | Narrative recs | — | — | ✓ Per-variable fix + projected improvement
Beyond-LL144 protected categories | Race + gender only | Race + gender only | Race + gender only | ✓ Broader | ✓ All EEOC + NYC HRL categories
Deployment integrity verification | — | — | — | — | ✓ Sealed deployment
Being honest about the competition

Holistic AI, DCI Consulting, and BABL AI collectively perform over half of all LL144 bias audits. They're established, competent, and deliver what the regulation requires: impact ratios, intersectional analysis, and publication-ready summaries. If your only goal is checking the LL144 compliance box at the lowest cost, they're proven options.

Where AI Asset Assurance is different — stated precisely: Every auditor above tells you IF bias exists by calculating selection rates. None of them tell you WHY it exists — which specific variable in your AI system causes the disparity, through what mechanism (direct effect vs. proxy correlation), and what would happen if you changed it. An LL144 audit that says "your impact ratio for Hispanic women is 0.74" is compliant. But it doesn't help you fix the problem. AI Asset Assurance says "the disparity is 42% driven by the 'years of industry experience' variable acting as an age and gender proxy — removing it improves the ratio to 0.91, with a 3% reduction in predictive validity recoverable by substituting skills-based assessment scores."

Why this matters beyond the DCWP audit: LL144 compliance protects you from a $1,500/violation fine. It does NOT protect you from a Title VII or NYC Human Rights Law class action — and those lawsuits don't care about selection rates. They care about causation. When a plaintiff's attorney asks "what caused the disparity and what did you do about it?", a compliant LL144 audit has no answer. AI Asset Assurance does.

Additionally, LL144 requires testing only for race/ethnicity and sex. The EEOC enforces across disability, age, national origin, and religion. NYC Human Rights Law covers even more categories. An LL144 audit that passes on race and gender can still leave you exposed to an EEOC complaint on disability — exactly what happened with HireVue. AI Asset Assurance evaluates across all protected categories, not just the LL144 minimum.

Real exposure

What NYC employers face without independent AI bias audits

Each scenario reflects documented patterns from EEOC enforcement, class action litigation, and DCWP investigations — applied to Automated Employment Decision Tools covered by Local Law 144.

LL144 penalty structure: $500 for the first violation. $500–$1,500 for each subsequent violation. Each use of an AEDT without a bias audit or without required candidate notice is a separate violation. A company screening 10,000 applicants per year without a compliant audit faces $5M–$15M in potential penalties — plus the audit results must be publicly posted, creating reputational exposure regardless of the findings.
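A back-of-the-envelope version of that arithmetic, assuming (as above) that every screened applicant represents a separate violation; the statute caps only the very first violation at $500, which this rough estimate ignores:

```python
def ll144_exposure(applicants, per_low=500, per_high=1500):
    """Rough LL144 penalty range when each AEDT use without a compliant
    audit counts as a separate violation at $500-$1,500 each."""
    return applicants * per_low, applicants * per_high

low, high = ll144_exposure(10_000)
print(f"${low:,} to ${high:,} in potential exposure")  # $5M-$15M range
```

Compare that range against the cost of the audit itself, and the compliance economics become obvious.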

Bias audit shows compliant selection rates — but hides proxy discrimination
Hiring · Class action exposure

A major NYC employer passes its annual LL144 bias audit — the selection rates by race and gender fall within the four-fifths rule. The audit is published. But the compliant aggregate numbers mask a problem: within the "passed" category, the AI scores Black candidates an average of 12 points lower than white candidates. The selection rate is compliant because the cutoff score is low enough. But when the employer raises the cutoff score six months later to reduce applicant volume, the disparity becomes outcome-determinative. The audit said the system was fair. The system had embedded bias the audit couldn't see.

WITHOUT CAUSAL ANALYSIS

The employer believes the system is fair because the audit passed. When the cutoff changes, the hidden bias becomes visible in outcomes. A rejected applicant sues under Title VII and NYC Human Rights Law. The employer's defense — "we had a compliant bias audit" — is undermined because the audit only measured selection rates, not the underlying score distribution. The published audit report is exhibit A in the plaintiff's case, showing the employer knew the tool was in use but didn't understand how it worked.

WITH AI Asset Assurance EVALUATION

AI Asset Assurance doesn't just calculate selection rates — Shapley attribution decomposes the score distribution and identifies which variables drive the 12-point scoring gap between demographic groups. The employer knows the bias exists even when the aggregate selection rates are compliant. They can fix the underlying model, not just the threshold. The evaluation is robust to cutoff changes because it addresses root cause, not symptom.

The four-fifths rule is a screening tool, not a guarantee of fairness. Selection rate compliance can mask significant underlying bias in scoring distributions.
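The masking effect in this scenario can be reproduced with two normal score distributions 12 points apart (hypothetical means and spread, chosen only for illustration): at a low cutoff the impact ratio clears the four-fifths line, and raising the cutoff pushes the very same system far below it.

```python
from math import erf, sqrt

def pass_rate(mean, sd, cutoff):
    """P(score >= cutoff) for a normally distributed score."""
    return 0.5 * (1 - erf((cutoff - mean) / (sd * sqrt(2))))

def impact_ratio(cutoff, mean_hi=70.0, mean_lo=58.0, sd=10.0):
    """Lower-scored group's selection rate relative to the higher-scored
    group's, for a shared cutoff and a fixed 12-point score gap."""
    return pass_rate(mean_lo, sd, cutoff) / pass_rate(mean_hi, sd, cutoff)

print(f"cutoff 45: impact ratio {impact_ratio(45):.2f}")  # clears 0.80
print(f"cutoff 70: impact ratio {impact_ratio(70):.2f}")  # far below 0.80
```

The score gap never changed; only the threshold did. That is exactly why a rate-only audit is cutoff-dependent while a distribution-level analysis is not.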

Video interview AI penalizes non-native English speakers and people with disabilities
Hiring · $4M–$12M + EEOC

A NYC company uses AI-scored video interviews as a screening step for 8,000 annual applicants. The system analyzes speech patterns, facial expressions, and response structure. It penalizes atypical speech cadence (affecting non-native speakers and people with speech disabilities), limited facial expressiveness (affecting neurodiverse candidates), and non-standard response structures (affecting candidates from non-Western cultural backgrounds). The bias audit calculates selection rates by race and gender — but LL144 doesn't require testing for disability, national origin, or neurodiversity.

WITHOUT CAUSAL ANALYSIS

The bias audit passes on race and gender — LL144's required categories. But the EEOC files a complaint on disability discrimination grounds, just as it did against iTutorGroup ($365K settlement for age discrimination by AI). NYC Human Rights Law protects categories beyond LL144's audit requirements. The LL144 audit provided a false sense of security. 8,000 applicants × $1,500/violation = $12M in LL144 penalties alone, plus EEOC exposure and NYC HRL damages.

WITH AI Asset Assurance EVALUATION

AI Asset Assurance evaluates beyond LL144's minimum requirements. Shapley attribution identifies "speech cadence variance," "facial expression range," and "response structure deviation" as the variables driving disparate outcomes across disability, national origin, and neurodiversity dimensions. The employer has evidence of comprehensive evaluation — not just LL144 compliance but genuine diligence across all protected categories under federal and NYC law.

Comparable: EEOC v. iTutorGroup ($365K settlement for AI age discrimination). ACLU complaint against HireVue/CVS for facial analysis "employability scores" affecting people with disabilities.

Vendor-provided AI promotion tool lacks audit — employer is liable
Promotion · $500–$1,500/violation

A NYC financial services firm uses an AI tool from a vendor to rank employees for promotion consideration. The vendor claims the tool has been "independently audited" but the audit was conducted by the vendor's preferred auditor on the vendor's training data — not on the employer's actual employee population. LL144 requires the audit to be conducted on the employer's data or the data the tool was trained on. The vendor's generic audit doesn't reflect the firm's specific workforce demographics, job categories, or performance distribution.

WITHOUT PROPER AUDIT

DCWP investigates and finds the bias audit doesn't meet LL144 requirements — it wasn't conducted on relevant data for the employer's specific use. The employer relied on the vendor's representation. Under LL144, the employer — not the vendor — is responsible for having a compliant audit. Every promotion decision made using the tool without a valid audit is a separate violation. The vendor's "audit" is worthless for compliance.

WITH AI Asset Assurance EVALUATION

AI Asset Assurance evaluates the tool on the employer's actual workforce data — not the vendor's generic training set. The evaluation produces LL144-compliant impact ratios plus Shapley attribution showing which variables drive any disparities within this specific employee population. The audit is independently conducted, covers the right data, and exceeds LL144's minimum requirements with causal decomposition.

LL144 places the compliance obligation on the employer, not the vendor. A vendor-provided audit on vendor data does not satisfy the employer's obligation.

$1,500 · per violation, per candidate
$15M+ · potential penalties for 10,000 applicants
Public · audit results must be published

A compliant bias audit costs $8K–$20K. The penalty for not having one is $500–$1,500 per applicant. AI Asset Assurance delivers LL144 compliance plus Shapley causal attribution that standard auditors don't provide — so you don't just know IF there's bias, you know WHY and HOW TO FIX IT.

Your next LL144 audit should tell you why — not just what.

Every current auditor calculates selection rates. AI Asset Assurance decomposes the cause with Shapley attribution. Request an evaluation and see the difference.

Request evaluation