Active · All 50 states

SR 11-7 requires independent model validation. AI Asset Assurance delivers structural independence no internal team can match.

Federal Reserve SR 11-7 and OCC 2011-12 require banks and financial institutions to maintain effective model risk management, including independent validation of all models used for material decisions. AI models in lending, credit scoring, fraud detection, and trading are increasingly in scope. AI Asset Assurance provides structurally independent validation with causal attribution that satisfies examination standards.

Request evaluation
REGULATION STATUS
Status: Active — ongoing
Regulators: Fed, OCC, FDIC
Scope: All material models
AI Asset Assurance coverage: Independent validation
Examination focus: AI/ML models increasing
What the regulation requires

Requirements and how AI Asset Assurance addresses each one

Every requirement maps directly to an AI Asset Assurance capability. One evaluation pipeline covers everything you need.

Independent model validation

SR 11-7 §§ IV.3–4

Model validation must be conducted by parties independent of the model development process and business areas that use the model. Validators must have appropriate incentives, competence, and influence.

AI Asset Assurance: Five-fold structural independence (legal entity, infrastructure, model provenance, evaluation criteria, economic) exceeds SR 11-7's independence requirement. Each validator is a separate organization running on separate infrastructure — not just a separate team within the same bank.

Outcomes analysis

SR 11-7 § IV.4

Validation must include analysis of model outcomes — comparing model outputs to actual results to assess accuracy and identify potential issues including bias.

AI Asset Assurance: Behavioral fingerprinting and delta analysis compare model outputs across demographic cohorts, input distributions, and edge cases. Shapley attribution decomposes outcome differences to specific input variables — essential for fair lending examination defense.

Ongoing monitoring

SR 11-7 § V

Banks must have a process to track model performance and identify potential problems in a timely manner. Monitoring should be tailored to the model's materiality and complexity.

AI Asset Assurance: Real-Time tier provides per-request behavioral verification. Canary probes detect model drift, configuration changes, and accuracy degradation. Monthly compliance reports document ongoing performance — ready for examiner review.
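The canary-probe mechanism can be sketched in a few lines: fixed probe inputs with scores recorded at certification time, re-scored on a schedule, with any deviation flagged as drift. This is a hedged illustration, not AI Asset Assurance's actual API — all names, features, and thresholds are hypothetical.

```python
# Hypothetical canary-probe drift check. Probe inputs and their certified
# scores are fixed at evaluation time; re-scoring them later detects silent
# model or configuration changes.
CANARY_PROBES = [
    ({"income": 52_000, "dti": 0.31, "history_months": 84}, 0.71),
    ({"income": 95_000, "dti": 0.18, "history_months": 140}, 0.88),
    ({"income": 28_000, "dti": 0.44, "history_months": 12}, 0.35),
]
TOLERANCE = 0.02  # maximum allowed per-probe deviation from the certified score

def check_canaries(model):
    """Re-score every probe; return alerts for any that moved beyond tolerance."""
    alerts = []
    for i, (features, certified) in enumerate(CANARY_PROBES):
        score = model(features)
        if abs(score - certified) > TOLERANCE:
            alerts.append(f"probe {i}: certified {certified:.2f}, now {score:.2f}")
    return alerts

def certified_model(f):
    # Stand-in for the model as certified (reproduces the recorded scores).
    return {84: 0.71, 140: 0.88, 12: 0.35}[f["history_months"]]

def drifted_model(f):
    # The same model after a silent update that penalizes short credit histories.
    score = certified_model(f)
    return score - 0.10 if f["history_months"] < 24 else score
```

Running `check_canaries(drifted_model)` flags the short-history probe while the certified model passes cleanly — the same signal, at production scale, is what turns an undetected vendor update into an immediate alert.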

Model inventory and documentation

SR 11-7 § VI

Banks must maintain a comprehensive model inventory and documentation of all models, including AI/ML systems, with information about purpose, limitations, and validation status.

AI Asset Assurance: Each evaluation produces a detailed certification report that serves as model documentation — including system architecture characterization, behavioral profile, limitation analysis, and validation status. Integrates into existing model inventory frameworks.

Examination risk: MRAs, MRIAs, consent orders, and enforcement actions

Examination findings can result in Matters Requiring Attention (MRAs), Matters Requiring Immediate Attention (MRIAs), consent orders, civil money penalties, and restrictions on business activities. AI model validation gaps are an increasing focus of examinations.

Who is affected

Is your organization in scope?

These are the organizations most likely to need compliance evaluation under this regulation.

Banks and bank holding companies

All banks supervised by the Federal Reserve and OCC that use AI models for material decisions — lending, credit scoring, fraud detection, BSA/AML, trading, and capital planning.

DIRECTLY SUPERVISED

Fintech lenders

Non-bank lenders using AI for credit decisions, particularly those partnering with banks. The bank partner's examination obligations extend to the fintech's AI models.

INDIRECT — VIA BANK PARTNER

Insurance companies

Insurers using AI for underwriting and claims decisions face increasing state-level model risk management requirements modeled on SR 11-7 principles.

STATE-LEVEL ADOPTION
How AI Asset Assurance works

From evaluation to certification in three steps

AI Asset Assurance doesn't just find problems — it identifies root causes with Shapley attribution and prescribes exactly how to fix them.

STEP 01

Validate independently

Five structurally independent validators evaluate your AI model across accuracy, bias, robustness, and stability dimensions. Structural independence means each validator is legally separate, economically independent, and running on distinct infrastructure — exceeding SR 11-7's independence standard.

STEP 02

Decompose outcomes

Shapley attribution decomposes model outcomes to individual input variables. For a credit scoring model, this means identifying exactly which features drive disparate outcomes across demographic groups — the level of analysis examiners increasingly expect for AI/ML models.
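The decomposition above can be illustrated with an exact Shapley computation over a small feature set: average each feature's marginal contribution across all orderings, explaining the score gap between an applicant and a reference profile. The toy model, feature names, and values below are illustrative assumptions, not the production attribution engine.

```python
from itertools import permutations
from math import factorial

FEATURES = ["income", "zip_risk", "history_months"]

def toy_score(x):
    # Stand-in scoring model: income and history help, zip_risk hurts.
    return 0.5 + 0.000004 * x["income"] - 0.3 * x["zip_risk"] + 0.001 * x["history_months"]

def shapley(model, x, baseline):
    """Exact Shapley values: average marginal contribution over all orderings."""
    phi = {f: 0.0 for f in FEATURES}
    for order in permutations(FEATURES):
        current = dict(baseline)
        prev = model(current)
        for f in order:
            current[f] = x[f]  # switch this feature from baseline to applicant
            val = model(current)
            phi[f] += val - prev
            prev = val
    n_perm = factorial(len(FEATURES))
    return {f: v / n_perm for f, v in phi.items()}

applicant = {"income": 40_000, "zip_risk": 0.9, "history_months": 24}
peer = {"income": 70_000, "zip_risk": 0.2, "history_months": 90}
phi = shapley(toy_score, applicant, peer)
# The values in phi sum exactly to toy_score(applicant) - toy_score(peer),
# so each feature's share of the score gap is explicitly quantified.
```

The key property is efficiency: the attributions sum exactly to the total score gap, which is why each variable's share of a disparate outcome can be stated as a concrete number rather than a ranked feature-importance list.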

STEP 03

Document for examination

Receive examination-ready documentation including model characterization, validation methodology, outcome analysis, limitation disclosure, and ongoing monitoring framework. Designed to satisfy MRM examination standards without rework.

AI Asset Assurance vs alternatives

The SR 11-7 compliance landscape is crowded. Here's where we fit.

Multiple platforms address model risk management. We're honest about what they do well — and precise about what only AI Asset Assurance provides.

| Capability | ValidMind / Experian | Credo AI | Big 4 firm | Internal MRM team | AI Asset Assurance |
| --- | --- | --- | --- | --- | --- |
| Model documentation | ✓ Automated | ✓ Policy packs | ✓ Custom | ✓ Manual | ✓ Standardized |
| Drift detection | ✓ Batch inference | ✓ Dashboard | Point-in-time | ✓ If built | ✓ Behavioral + sealed |
| Examination-ready output | ✓ Strong | — | ✓ Custom format | — | ✓ |
| Bias detection | ✓ Fairness tests | ✓ Aggregate scores | Limited | ✓ If implemented | ✓ Cohort analysis |
| Explainability | ✓ Feature importance | ✓ Scorecards | Narrative | ✓ SHAP/LIME | ✓ Shapley per-variable |
| Structural independence | Single vendor platform | Single vendor platform | Single firm | Same organization | ✓ 5 independent validators, BFT consensus |
| Deployment integrity verification | — | — | — | — | ✓ 6-layer sealed deployment |
| Prescriptive remediation | Findings log (human writes fix) | Generic strategy suggestions | Narrative recommendations | Team determines fix | ✓ Per-variable fix + projected improvement |
| Per-request verification | — | — | — | — | ✓ Real-Time tier |
Being honest about the competition

ValidMind (backed by Point72 Ventures, partnered with Experian) is the strongest platform for model risk management workflow — documentation automation, validation tracking, and governance. If your primary need is streamlining MRM operations and generating examination-ready documentation, they're a strong choice.

Credo AI ($39M raised, Gartner/Forrester recognized) covers model-level, agent-level, and application-level governance with pre-built policy packs. Their continuous monitoring and compliance documentation are well-regarded.

Where AI Asset Assurance is different — stated precisely: Every platform above can tell you something is wrong with your model. AI Asset Assurance tells you which specific variable caused it, through what mechanism (direct effect vs proxy correlation), what to change (remove, reweight, or replace), and what the projected improvement would be (e.g., "removing variable X improves your fairness score from 0.72 to 0.89 with a 2% accuracy tradeoff recoverable by adding variable Y"). That diagnosis-to-prescription pipeline is automated in the evaluation output — not a separate engagement, not a human interpretation step.

Additionally, no platform above verifies that the model in production is the same model that was validated — they monitor performance after the fact. And no platform provides structurally independent evaluation using multiple validators from separate organizations. ValidMind is the governance operating system. AI Asset Assurance is the independent auditor. Most institutions will benefit from both.

Real-Time tier

Per-request verification for every lending decision, every trade, every credit score

SR 11-7 requires ongoing monitoring. AI Asset Assurance Real-Time makes every transaction independently verified — not just the model, but the entire deployment stack.

Standard evaluation
Quarterly re-evaluation

Full evaluation every 90 days. Examination-ready documentation. Meets the baseline SR 11-7 requirement for independent model validation. Between evaluations, you rely on internal monitoring.

Real-Time tier
Per-request verification

Every lending decision, credit score, and fraud detection output is verified against the certified deployment. If the model, prompt, or configuration changes, the certification is invalidated before the next transaction processes. The OCC examiner doesn't ask "when was your last validation?" — you show them continuous verification receipts.
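The gating logic described above can be sketched as a seal-and-verify check: hash the certified deployment (weights digest, prompt version, config) into one seal, then gate each transaction on the live stack matching it. The field names and single-hash scheme are simplifying assumptions for illustration — the actual product describes a 6-layer seal.

```python
import hashlib
import json

def seal(deployment):
    """Canonicalize the deployment description and hash it into one seal."""
    canonical = json.dumps(deployment, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

# Hypothetical certified deployment recorded at evaluation time.
CERTIFIED = {
    "weights_digest": "sha256:ab12cd34",
    "prompt_template_version": "2025-11-03",
    "temperature": 0.0,
}
CERTIFIED_SEAL = seal(CERTIFIED)

def verify_request(live_deployment):
    """Process a transaction only if the live stack matches the certified seal."""
    return seal(live_deployment) == CERTIFIED_SEAL
```

Any change — a new prompt version, a retuned temperature, a swapped weights file — produces a different seal, so the mismatch is detected before the next transaction processes rather than at the next quarterly review.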

The examination question that changes everything: "When was your model last independently validated?" With quarterly evaluation: "January." With Real-Time: "47 seconds ago — and here are 1.2 million verification receipts from the last 30 days." That's a fundamentally different examination posture.

Real exposure

What financial institutions face without independent AI model validation

Each scenario reflects documented patterns from enforcement actions, examination findings, and industry incidents — applied to AI models that fall under SR 11-7 and OCC Bulletin 2011-12.

Why SR 11-7 exposure is different: The OCC has cited AI in 17 enforcement matters since fiscal year 2020. Deviation from the Model Risk Management Guidance is grounds for a Matter Requiring Attention — an examiner mandate that operates as an informal enforcement action and triggers board-level review. MRAs can lead to ratings downgrades, consent orders, and civil money penalties. The guidance requires independent model validation, ongoing monitoring, and outcomes analysis — and the OCC has made clear these requirements apply to AI/ML models.

AI credit scoring model silently drifts after vendor update
Credit: MRA + consent order

A regional bank uses a third-party AI credit scoring model for consumer lending. The vendor pushes a model update that changes feature weightings — reducing emphasis on payment history and increasing weight on geographic and employment stability signals. The bank's validation team approved the original model, but the vendor update wasn't subjected to independent re-validation. Over six months, denial rates for applicants in majority-minority ZIP codes increase by 18%. The shift only surfaces during the next OCC examination when examiners review fair lending data.

WITHOUT INDEPENDENT VALIDATION

The bank treated the vendor update as a "minor enhancement" and didn't re-validate. The OCC finds insufficient model risk management for a material model. MRA issued. The bank must demonstrate the model was performing as intended throughout the period — but they have no independent validation data for those six months. The examination escalates to a consent order with fair lending remediation requirements.

WITH AI Asset Assurance VALIDATION

Real-Time deployment integrity verification detects the model change immediately — the behavioral fingerprint shifts when the vendor pushes the update. Axiom alerts the bank before the updated model processes a single loan decision. Shapley attribution identifies the new geographic weighting as a potential fair lending risk. The bank pauses the model, validates the update, and addresses the issue before any consumers are affected.

The OCC's Comptroller's Handbook on MRM explicitly states that vendor model changes require the same validation rigor as new models. Most banks don't treat them that way.

Fraud detection AI produces 340% increase in false positives for immigrant customers
BSA/AML: MRA + CMP risk

A mid-size bank deploys AI-based transaction monitoring for BSA/AML compliance. The model flags suspicious activity based on transaction patterns, frequency, and counterparty analysis. But the training data overrepresents domestic transaction patterns. International remittances — common among immigrant customers — trigger false positives at 4.4 times the domestic rate, a 340% increase. The bank's compliance team is overwhelmed with false alerts, and legitimate suspicious activity gets buried in the noise. Simultaneously, thousands of customers have accounts frozen or transactions delayed.

WITHOUT INDEPENDENT VALIDATION

The bank can't explain why false positive rates differ so dramatically across customer segments. Examiners find the outcomes analysis required by SR 11-7 was never performed on disaggregated population data. The MRA cites insufficient ongoing monitoring and inadequate outcomes analysis. FinCEN penalties for BSA/AML deficiencies can reach $1M per day. The bank must also remediate affected customers — account unfreezing, transaction reprocessing, and potential UDAP exposure for unfair treatment.

WITH AI Asset Assurance VALIDATION

Cohort analysis during initial evaluation reveals the 340% false positive disparity across customer segments. Shapley attribution identifies "international transfer frequency" and "counterparty country risk score" as contributing 58% of the disparity. The bank adjusts the model to normalize for legitimate remittance patterns before deploying to production. Quarterly re-evaluation monitors for drift. Examination-ready documentation shows the disparity was identified, diagnosed, and resolved.
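The cohort analysis step can be sketched simply: compute the false positive rate per customer segment on disaggregated data, then take the ratio between segments. The records below are synthetic and the counts illustrative; the point is that the disparity only becomes visible once outcomes are split by cohort.

```python
def false_positive_rate(records):
    """Share of non-suspicious customers the model wrongly flagged."""
    innocent = [r for r in records if not r["suspicious"]]
    flagged = sum(1 for r in innocent if r["flagged"])
    return flagged / len(innocent)

# Synthetic data: 100 legitimate customers per cohort; the model flags
# 5 domestic vs. 22 international customers.
cohorts = {
    "domestic": [{"flagged": i < 5, "suspicious": False} for i in range(100)],
    "international": [{"flagged": i < 22, "suspicious": False} for i in range(100)],
}

rates = {name: false_positive_rate(recs) for name, recs in cohorts.items()}
disparity = rates["international"] / rates["domestic"]  # ratio between cohorts
```

Aggregate validation would report a blended false positive rate and pass; the per-cohort split is what SR 11-7's outcomes analysis requirement demands, and what the examiner will compute if the bank does not.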

The vast majority of AML alerts are false positives — industry estimates range from 90% to 95%. AI is supposed to reduce this, but poorly validated AI can make it worse while introducing discriminatory patterns.

AI mortgage underwriting model uses rental payment history as a proxy for race
Lending: Fair lending + CRA

A bank adopts an AI underwriting model that incorporates "alternative data" including rental payment history, utility payments, and bank account management patterns. The model is marketed as expanding credit access to thin-file borrowers. In practice, the rental payment data reflects landlord reporting patterns — landlords in majority-minority neighborhoods report late payments at higher rates, not because tenants pay later, but because those landlords use property management software that reports more aggressively. The model treats this reporting asymmetry as a credit risk signal. Black and Hispanic applicants with identical payment behavior are scored lower.

WITHOUT INDEPENDENT VALIDATION

The bank's internal validation focused on predictive accuracy — the model performs well on back-testing. But the fair lending analysis was performed on aggregate data, not disaggregated by race. The OCC and CFPB jointly examine the bank. HMDA data reveals the disparity. The bank faces dual exposure: OCC MRA for model risk management deficiencies and CFPB enforcement for fair lending violations. The CFPB fined Apple $25M and Goldman Sachs $45M for Apple Card failures in 2024 — this is the same exposure category.

WITH AI Asset Assurance VALIDATION

Shapley attribution identifies "rental payment reporting frequency" as contributing 29% of the racial disparity in underwriting scores — despite the variable appearing race-neutral. The evaluation report shows this is a proxy: the disparity is driven by differential landlord reporting practices, not differential borrower behavior. The bank adjusts the model to normalize rental payment data by reporting source type, eliminating the proxy effect while retaining the credit-predictive value of rental history.

Comparable: Apple Card / Goldman Sachs — the CFPB imposed $70M in combined penalties in October 2024. The investigation started with a viral tweet about gender disparity in credit limits and escalated to an 18-month regulatory examination.

AI trading model's risk parameters shift during market volatility
Trading: Safety & soundness

A bank's proprietary trading desk uses AI models for automated market-making and risk management. During a period of elevated volatility, the adaptive ML model adjusts its risk parameters — widening position limits and increasing concentration in correlated assets that the model's historical training identifies as "mean-reverting." The model was validated under normal market conditions. The adaptive behavior during stress conditions was never independently tested. The model is technically performing as designed — adapting to market conditions — but the adapted parameters exceed the risk appetite the board approved.

WITHOUT INDEPENDENT VALIDATION

The adaptive parameter shift isn't caught until the position generates a $40M loss in a single week. Post-mortem reveals the model's stress-condition behavior was never validated — only its steady-state performance. The OCC examination finds the model inventory didn't document the adaptive behavior, the validation didn't cover tail-risk scenarios, and ongoing monitoring failed to detect the parameter drift. MRA for model risk, plus potential safety and soundness action for inadequate risk controls.

WITH AI Asset Assurance VALIDATION

Real-Time deployment integrity verification detects the parameter shift the moment the model adapts. The behavioral fingerprint of the model changes — even though the model binary is the same, its operating configuration has shifted. Axiom alerts risk management that the model's current parameters exceed the validated envelope. The trading desk can evaluate the adaptation and make a deliberate decision — accept the risk, override the adaptation, or pause automated trading — before the position compounds.
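The "validated envelope" idea in this scenario can be sketched as a bounds check: the adaptive model is free to retune itself, but its live parameters are continuously compared against the ranges that were independently validated. Parameter names and limits below are hypothetical.

```python
# Hypothetical bounds validated at certification time: (lower, upper) per parameter.
VALIDATED_ENVELOPE = {
    "max_position_usd": (0, 5_000_000),
    "max_sector_concentration": (0.0, 0.15),
    "var_limit_usd": (0, 1_200_000),
}

def envelope_violations(live_params):
    """List every live parameter that has drifted outside its validated bounds."""
    violations = []
    for name, (lo, hi) in VALIDATED_ENVELOPE.items():
        value = live_params[name]
        if not lo <= value <= hi:
            violations.append(f"{name}={value} outside validated range [{lo}, {hi}]")
    return violations

# Parameters after the model adapts to volatility: two of three now exceed
# the envelope the board approved, even though the model binary is unchanged.
adapted = {
    "max_position_usd": 8_000_000,
    "max_sector_concentration": 0.28,
    "var_limit_usd": 900_000,
}
```

The check is deliberately indifferent to *why* the parameters moved — adaptation by design still counts as leaving the validated envelope, which is exactly the deliberate accept/override/pause decision point the scenario describes.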

Adaptive ML models in trading are the hardest SR 11-7 compliance challenge because the model's behavior changes by design. The question is whether those changes are monitored and bounded.

AI customer chatbot provides inaccurate loan terms and commitments
Consumer UDAP + reputational

A bank deploys a GenAI-powered customer service chatbot that answers questions about mortgage products, loan rates, and account features. The LLM was fine-tuned on the bank's product documentation, but the RAG index hasn't been updated since a rate change three weeks ago. The chatbot confidently quotes outdated rates and terms to hundreds of customers. Some customers make financial decisions — locking in competing rates, making down payments — based on the chatbot's representations. When the actual terms differ, the bank faces complaints and potential UDAP exposure for deceptive practices.

WITHOUT INDEPENDENT VALIDATION

The bank didn't treat the chatbot as a "model" under SR 11-7 because it's customer service, not a decisioning model. But the OCC examiner notes that the chatbot's outputs materially influence consumer decisions — that makes it a model under the guidance. No model inventory entry, no validation, no ongoing monitoring. MRA for model risk management, plus UDAP referral to CFPB for deceptive representations. Customer remediation costs for honoring the quoted rates or compensating affected customers.

WITH AI Asset Assurance VALIDATION

Deployment integrity verification seals the chatbot's full stack — including the RAG index. When the rate change occurs but the RAG index isn't updated, the sealed deployment is no longer consistent with the current product terms. Axiom flags the mismatch before the chatbot serves a single outdated response. The bank updates the RAG index, AI Asset Assurance re-verifies the deployment, and the chatbot resumes with accurate information. Zero customers affected.

SEC examinations in 2023 found that most investment advisers did not have comprehensive policies governing AI use — and several had misrepresented their use of AI to clients. The same gap exists in banking.

17 OCC enforcement matters citing AI since FY2020
$70M in Apple Card / Goldman fines for algorithmic failures (2024)
$1M/day FinCEN penalty ceiling for BSA/AML deficiencies

SR 11-7 requires three things: independent validation, ongoing monitoring, and outcomes analysis. AI Asset Assurance delivers all three — with Shapley causal attribution that no internal validation team or Big 4 firm provides. The question isn't whether examiners will ask about your AI models. It's whether you'll have answers when they do.

Your next examination will ask about AI model validation.

Structural independence. Causal attribution. Examination-ready documentation. Get ahead of the examiner's questions.

Request evaluation