SR 11-7 evidence for AI and model risk.
For banks and capital markets teams, TrustEvals turns AI behavior into evidence model-risk teams can use for inventory, validation, monitoring, and governance.
SR 11-7 is US banking supervisory guidance for model risk management. It is not an AI law and it is not an AI standard. For banking organizations, AI and machine-learning systems can fall into model risk scope when they produce quantitative estimates, classifications, decisions, or decision support that affects business outcomes.
SR 11-7 was issued jointly by the Federal Reserve and the OCC as supervisory guidance for model risk management in banking organizations.
The compliance claim needs source evidence.
SR 11-7 does not give an AI checklist. It gives model-risk expectations. The practical task is translating AI behavior into inventory, validation, monitoring, change control, and governance evidence.
Model inventory.
Requirement. Maintain a complete inventory with ownership, purpose, use, limitations, and materiality.
Evidence. AI system inventory, model or agent owner, business use, materiality flag, dependency map, and approval state.

Development and implementation.
Requirement. Document design, data, assumptions, limitations, and implementation controls.
Evidence. Use-case baseline, data source record, prompt or model version, control design, test set, and implementation signoff.

Independent validation.
Requirement. Independently assess conceptual soundness, outcomes, ongoing performance, and limitations.
Evidence. Validation packet, benchmark results, exception analysis, challenger review notes, weakness log, and remediation status.

Ongoing monitoring and governance.
Requirement. Track performance over time, monitor changes, escalate issues, and report model risk to governance forums.
Evidence. Drift report, control-health time series, incident trace, change log, stale-evidence flag, and committee-ready risk summary.
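The inventory and stale-evidence artifacts above can be sketched as a minimal record structure. This is an illustrative assumption, not TrustEvals' actual schema: the field names, the example system, and the 365-day revalidation window are all invented for the sketch.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class InventoryRecord:
    """Illustrative AI model-inventory record; not an actual SR 11-7 schema."""
    system_id: str
    owner: str                  # accountable model or agent owner
    business_use: str           # purpose and use, per inventory expectations
    materiality: str            # e.g. "high", "medium", "low"
    approval_state: str         # e.g. "approved", "conditional", "retired"
    dependencies: list = field(default_factory=list)  # upstream models, data, vendors
    last_validated: Optional[date] = None

    def needs_revalidation(self, today: date, max_age_days: int = 365) -> bool:
        """Flag stale evidence: never validated, or validated too long ago."""
        if self.last_validated is None:
            return True
        return (today - self.last_validated).days > max_age_days

record = InventoryRecord(
    system_id="credit-memo-summarizer",
    owner="model-risk@example-bank.com",
    business_use="Decision support for credit committee memos",
    materiality="high",
    approval_state="approved",
    dependencies=["vendor-llm-v4", "loan-docs-feed"],
    last_validated=date(2024, 1, 15),
)
print(record.needs_revalidation(today=date(2025, 6, 1)))  # True: well past 365 days
```

A record like this is only the skeleton; the validation and monitoring evidence in the rows above is what fills it with defensible content.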
What finance teams should remember.
AI does not escape model-risk scope just because it looks like software.
If an AI system influences a banking decision, estimate, classification, or control outcome, model-risk teams need a defensible inventory and monitoring position.
Validation needs behavioral evidence.
Static documentation is not enough for systems whose outputs change with prompts, data, tools, vendors, or model versions. Validation needs observed behavior over time.
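The point about observed behavior can be illustrated with a minimal drift check: compare a control's current eval pass rate against its validated baseline and flag when it degrades beyond a tolerance. The function name, rates, and threshold are illustrative assumptions, not TrustEvals' implementation.

```python
def flag_drift(baseline_pass_rate: float, current_pass_rate: float,
               tolerance: float = 0.05) -> bool:
    """Return True when observed behavior has degraded beyond tolerance.

    Sketch only: a real monitoring program would also track sample sizes,
    confidence intervals, and per-control breakdowns over time.
    """
    return (baseline_pass_rate - current_pass_rate) > tolerance

# Baseline from the validation packet vs. this month's eval run.
print(flag_drift(baseline_pass_rate=0.97, current_pass_rate=0.89))  # True
print(flag_drift(baseline_pass_rate=0.97, current_pass_rate=0.95))  # False
```

The design point is that the comparison runs against observed outputs on each new eval cycle, so a prompt, vendor, or model-version change shows up as evidence rather than going unnoticed in static documentation.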
Governance forums need a summary, not raw traces.
TrustEvals keeps raw trace evidence available while producing risk summaries that model-risk committees, internal audit, and business owners can actually review.
SR 11-7 AI, asked plainly.
Is SR 11-7 an AI law or an AI standard? No. SR 11-7 is supervisory guidance for model risk management in banking. AI systems can fall under model-risk scope depending on how they are used.
Which AI systems are the highest priority for review? Systems that produce estimates, classifications, recommendations, decisions, or decision support for material banking activity are the highest-priority candidates for model-risk review.
How does TrustEvals support validation? TrustEvals produces behavioral evidence: baselines, eval results, drift reports, change history, incident traces, and exception analysis that validation teams can review.
How does SR 11-7 relate to the NIST AI RMF? SR 11-7 is banking model-risk guidance. The NIST AI RMF is a voluntary AI risk framework. Finance teams can use NIST vocabulary while preserving SR 11-7 model-risk evidence requirements.
Book the AI Audit.
Thirty minutes to scope your AI footprint, access paths, and Shadow AI exposure, and leave with a board-ready read.