April 30, 2026

What SR 26-2 Got Right About Model Risk – And the Critical Temporary Gap It Left for AI Governance Teams in Finance


In April, three federal banking regulators rewrote the framework that has governed model risk in U.S. banks for fifteen years – and in doing so, they did something unusual. They admitted, in writing, that they don’t yet know how to govern the AI that most banks are actually deploying.

→ Why This Matters

Here lies the core reality of the new landscape. By carving out GenAI and agentic AI, SR 26-2 didn’t shrink the governance problem; it expanded it. The carveout pushes responsibility onto enterprise risk frameworks that, at most banks, do not yet exist. That is the gap MRM and risk leaders in financial organizations must collaboratively close before the forthcoming request for information lands, and the gap they should be prepared to address in their responses to regulators.

This article dissects the revised 2026 guidance, offering comprehensive, actionable steps for banking organizations striving to align their Model Risk Management (MRM) and general Risk Management practices with the new regulatory expectations.

Key Changes from the April 2026 Joint Guidance 

On April 17, 2026, the landscape of Model Risk Management experienced a historic update. The Board of Governors of the Federal Reserve System, the Office of the Comptroller of the Currency (OCC), and the Federal Deposit Insurance Corporation (FDIC) jointly issued SR 26-2: Revised Guidance on Model Risk Management.

This landmark directive supersedes two critical pieces of legacy guidance:

  • SR letter 11-7, Guidance on Model Risk Management (issued April 4, 2011)
  • SR letter 21-8, Interagency Statement on Model Risk Management for Bank Systems Supporting Bank Secrecy Act/Anti-Money Laundering Compliance (issued April 9, 2021). 

By consolidating and modernizing these frameworks, regulators have established a tailored, risk-based approach designed to address the complexities of modern banking technologies.

At its core, SR 26-2 defines model risk as the potential for adverse financial consequences arising from decisions made on the basis of model output, to be assessed as a balance of inherent risk, exposure, and purpose. The definition intentionally keeps traditional statistical and quantitative models, along with non-generative, non-agentic AI, within MRM scope, while explicitly carving out GenAI and agentic systems. Beyond the definition, the 2026 framework significantly refines how financial organizations must identify, scope, and manage this risk, moving away from a blanket approach to a highly targeted, resource-efficient methodology.

SR 26-2: Scope, Core Definitions and Obligations

The updated guidance focuses most heavily on banking organizations with over $30 billion in assets, or those exhibiting high-risk profiles, regardless of their total size.

While the guidance clarifies that it does not constitute a set of enforceable laws, it strictly outlines the “sound principles” regulators expect to see operating within a mature, enterprise-grade risk management environment. Grounded in foundational statutory authorities (including 12 CFR Part 302, App A (FDIC); 12 CFR Part 4, Subpart F, App A (OCC); 12 CFR Part 262, App A (Federal Reserve); and Section 3(q) of the FDI Act), SR 26-2 demands a proactive, structural evolution from risk leaders.

Notable Shifts

SR 26-2 introduces several profound shifts in how Model Risk Management is executed day-to-day, streamlining operations while intensifying scrutiny on complex intersections of technology.

Materiality: A Combination of Model Exposure & Model Purpose

The revised guidance emphasizes that model risk is a direct function of a model’s inherent risk (complexity and assumptions) and its materiality (exposure and purpose).

Materiality is by no means a static label; it is explicitly defined as the combination of model exposure (the quantitative financial or operational impact) and model purpose (the qualitative, strategic, or regulatory importance of the model).

For enterprise risk teams, this bifurcated definition of materiality means that a model with a relatively low financial footprint but a high regulatory purpose (such as a compliance screening tool) may still trigger rigorous validation requirements. Conversely, high-exposure models that are purely operational might be tiered differently depending on the organization’s risk appetite.

→ Takeaway: MRM groups must urgently refine their internal risk-tiering matrices to align with this two-pronged definition. Organizations should develop standardized assessment frameworks to independently score “Exposure” (e.g., portfolio size, capital at risk) and “Purpose” (e.g., regulatory reporting vs. internal operational efficiency). This dual-scoring system will separate “immaterial” models, which require only continuous monitoring, from “higher materiality” models demanding stringent oversight. Quantifying “Exposure” in particular will require new, documented assessment guidelines to ensure consistency across business units.
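To make the dual-scoring idea concrete, here is a minimal sketch of how a tiering function might combine independent “Exposure” and “Purpose” scores. The 1–5 scales, the max-based combination, and the tier cutoffs are illustrative assumptions, not anything SR 26-2 prescribes:

```python
# Illustrative sketch only: SR 26-2 does not prescribe a scoring formula.
# Scales, combination rule, and cutoffs are hypothetical examples of how
# an MRM group might operationalize the two-pronged materiality definition.

def materiality_tier(exposure_score: int, purpose_score: int) -> str:
    """Combine independent Exposure and Purpose scores (1=low .. 5=high)
    into a materiality tier. Taking the max ensures a low-exposure but
    high-purpose model (e.g., a compliance screen) still tiers high."""
    combined = max(exposure_score, purpose_score)
    if combined >= 4:
        return "higher materiality"   # full validation and effective challenge
    if combined >= 2:
        return "standard"             # proportionate review
    return "immaterial"               # continuous monitoring only

# A compliance screening tool: small financial footprint, high regulatory purpose.
print(materiality_tier(exposure_score=1, purpose_score=5))  # higher materiality
```

Taking the maximum rather than an average is one way to encode the guidance’s point that a high regulatory purpose alone can trigger rigorous validation, regardless of financial footprint.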

Aggregate Risk Focus

Regulators are moving away from viewing models in isolation. The new guidance mandates a focus on Aggregate Risk, requiring institutions to evaluate dependencies and common assumptions across multiple models simultaneously. This is particularly critical when implementing interconnected systems where an error in one model cascades through others.

Let’s take a concrete example. Consider a bank where the credit risk PD model, the IFRS 9 ECL model, and the capital stress test all draw from the same macroeconomic scenario generator. Under SR 11-7, each was validated independently. Under SR 26-2, the agencies expect the bank to ask: what happens when the scenario generator drifts, and three downstream models drift in correlated ways? 
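The shared-scenario-generator example can be operationalized as a simple inversion over the model inventory’s dependency map. The inventory fragment and model names below are invented for illustration; the pattern is flagging any upstream component that feeds multiple downstream models as a single point of correlated drift:

```python
from collections import defaultdict

# Hypothetical inventory fragment: each model lists its upstream inputs.
# Names are illustrative, not taken from the guidance.
dependencies = {
    "credit_pd_model":     ["macro_scenario_generator", "obligor_data"],
    "ifrs9_ecl_model":     ["macro_scenario_generator", "credit_pd_model"],
    "capital_stress_test": ["macro_scenario_generator"],
    "pricing_model":       ["market_data_feed"],
}

def shared_upstream(deps: dict[str, list[str]], min_consumers: int = 2) -> dict[str, list[str]]:
    """Invert the dependency map and flag any upstream component feeding
    several downstream models -- a common assumption whose failure
    would propagate in correlated ways."""
    consumers = defaultdict(list)
    for model, inputs in deps.items():
        for upstream in inputs:
            consumers[upstream].append(model)
    return {u: m for u, m in consumers.items() if len(m) >= min_consumers}

print(shared_upstream(dependencies))
# flags macro_scenario_generator as a common assumption across three models
```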

Inventory Cleanup

In a pragmatic move, regulators are explicitly excluding simple arithmetic spreadsheets and deterministic, rule-based software from the formal MRM inventory. This acknowledges that applying advanced statistical validation to basic calculators is a misallocation of resources.

Separation of Novel AI (GenAI and AI Agents)

Perhaps the most significant shift is the explicit exclusion of generative AI (GenAI) and agentic AI from the formal MRM scope, labeling these technologies as “novel and rapidly evolving”. The responsibility for these technologies is placed with other risk areas within the organization.

→ Takeaway: Conduct an immediate inventory cleanup. Remove simple arithmetic spreadsheets and deterministic rule-based software from the formal MRM lifecycle. However, do not leave them unmanaged; reclassify them as “Critical Business Tools” subject to general IT, enterprise, or financial risk controls rather than full MRM validation. Organizations under the $30B threshold should review their portfolios as well: they may face reduced regulatory pressure unless their model use is highly complex, high-risk in aggregate, or outside traditional banking, and their policies should be amended to document any such exceptions.

The updated definition of what is in scope for MRM creates a gap, as discussed above. But it also creates an opportunity: MRM groups can train and focus their skilled resources more efficiently on the core of what they have traditionally assessed, accelerating the approval of critical models.

Many banks have struggled to decide who should assess critical customer-facing GenAI or agentic solutions, assuming MRM had to take this on while other risk groups provided oversight for non-customer-facing AI. That approach stretched scarce AI skills across the organization and created bottlenecks.

Core Obligations

Despite the exclusions, the core obligations for in-scope traditional models remain rigorous. Validation must comprehensively evaluate conceptual soundness, conduct thorough outcomes analysis, and ensure ongoing monitoring.

Furthermore, the concept of “Effective Challenge” has been significantly strengthened. The guidance emphasizes that independent reviewers must possess the “organizational standing” and resources necessary to effect actual change, rather than merely offering technical critiques. This means MRM leaders need a seat at the executive table and the authority to halt model deployment if risks are unmitigated. Finally, the guidance acknowledges that urgent business needs may occasionally require model usage before validation is fully complete; however, this is only permissible under strict, heavily documented temporary controls.

→ Takeaway: MRM groups must adjust their validation rigor based on the new materiality matrix. Teams may need to “de-scope” intensive, granular testing for low-impact models to focus premium quantitative resources on critical, high-materiality engines. Assess your organizational structure to ensure the MRM function has the standing and skills required to enforce “Effective Challenge”; this may require reporting structure modifications and new independent assessment methods.

Main Insights

To achieve compliance and optimize risk operations, institutions must translate these regulatory principles into technical and procedural realities. Below are the key insights and strategic pillars required to align with SR 26-2.

Model Classification and the GenAI Gap

Under the 2026 guidance, the binary classification of “Model vs. Non-Model” is obsolete. Banks must transition to a three-tiered assessment framework:

  1. Traditional Models: Systems applying statistical, economic, or quantitative theories to process data into estimates. These remain fully in scope and require the full rigor of effective challenge.
  2. Non-Model Tools: Simple arithmetic, spreadsheets, and deterministic rules. These are excluded from MRM but require general governance.
  3. Excluded Innovations: GenAI and agentic AI are currently excluded from MRM governance. These require general Risk governance. 

This creates what industry experts call the “GenAI Gap”. While the guidance explicitly places GenAI out of scope, it makes clear that an institution’s general risk management and governance practices should still govern these tools. Model risk management teams cannot simply ignore generative AI and agentic solutions; they must work collaboratively with general risk teams to ensure policies, guidelines, and processes are in place to govern the risk of all models.

Crucially, the OCC’s press release announcing SR 26-2 contained a definitive signal: the agencies explicitly stated they “plan to issue in the near future a request for information that addresses Model Risk Management generally and considers, in particular, banks’ use of AI, including generative AI, agentic AI, and AI-based models.”

→ Takeaway: This upcoming RFI signifies that regulators are actively mapping the territory to build formal enforcement mechanisms. For Risk and MRM teams, this creates a critical 12 to 18-month window to proactively build out enterprise AI governance frameworks and close the GenAI gap before formal rules are codified and supervisory examinations catch up.

MRM functions must proactively collaborate and coordinate with enterprise risk, cybersecurity, and data governance groups. Together, they must establish a separate, parallel governance framework for GenAI that focuses heavily on agentic risks (such as autonomous decision-making, hallucination, and prompt injection) to prepare for the impending revised guidance that will close the GenAI gap. Remember that traditional ML (non-generative) remains in scope and subject to full MRM rigor today.

Hybrid Agentic AI Solutions Framework

The modern banking tech stack rarely uses models in isolation. Often, large language models or AI agents act as orchestrators, pulling data from or feeding inputs into traditional statistical models. The 2026 guidance provides a highly nuanced framework for these hybrid systems.

While Generative AI and agentic AI models are explicitly excluded from scope, the rules change when they interact with legacy or new systems. If an AI agent is used to access or utilize traditional statistical, quantitative, or non-generative AI models, those underlying models remain fully within the scope of the guidance.

→ Takeaway: Map out your institution’s AI architecture meticulously. Whenever a generative AI layer interfaces with an underlying credit-scoring or pricing model, the traditional model must still undergo full validation with the context of the aggregate integration. Furthermore, the organization’s broader governance practices must determine the appropriate controls for how the agentic layer interacts with the in-scope model, ensuring the agent cannot corrupt the inputs or misapply the outputs. The focus on aggregate risk also requires a systematic lens to assess the holistic risk of implementing agentic AI solutions that rely on these in-scope models.
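One way to sketch this architecture mapping is a reachability walk over the integration graph: starting from each agent, collect every traditional model it can touch, since those stay fully in MRM scope even though the agent itself is carved out. The graph and model names here are hypothetical:

```python
# Hypothetical sketch: walk an integration graph to find which traditional
# models an agentic layer can reach; those remain fully in MRM scope under
# SR 26-2 even though the agent itself is excluded.

calls = {
    "loan_assistant_agent": ["credit_scoring_model", "doc_summarizer_llm"],
    "doc_summarizer_llm":   [],
    "credit_scoring_model": ["pricing_model"],
    "pricing_model":        [],
}
traditional = {"credit_scoring_model", "pricing_model"}

def in_scope_via_agent(agent: str) -> set[str]:
    """Depth-first walk from the agent; collect reachable traditional models."""
    seen, stack, hits = set(), [agent], set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        if node in traditional:
            hits.add(node)
        stack.extend(calls.get(node, []))
    return hits

print(sorted(in_scope_via_agent("loan_assistant_agent")))
# ['credit_scoring_model', 'pricing_model']
```

Transitive reachability matters here: the agent never calls the pricing model directly, yet the pricing model is still flagged because the credit-scoring model it does call depends on it.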

AI Governance and Oversight Improvements

To operationalize the updated expectations, MRM teams must significantly upgrade their governance toolkits.

Effective Challenge Logs

Documentation must evolve beyond simple meeting minutes. Institutions need logs that specifically track the independence and influence of the challenger, explicitly showing where changes were mandated by MRM and successfully implemented by the model owners.

Internal Audit Integration

SR 26-2 marks a significant shift in the role of internal audit. Audit teams should now act as evaluators of the holistic MRM process itself, rather than duplicating the granular validation activities performed by the MRM group.

→ Takeaway: Implement an enterprise-wide “Challenge Log” system that records technical critiques and tracks them through to remediation. Engage with your internal audit leaders to redefine their testing scripts, moving them away from model code review and toward assessing the efficacy of the MRM governance structure and organizational standing.
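As a minimal sketch of what such a challenge log entry might look like, assuming a simple open → remediated → verified lifecycle; the field names are illustrative, not a regulatory standard:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative record structure; field names are assumptions, not a standard.
@dataclass
class ChallengeEntry:
    model_id: str
    challenger: str          # must be independent of the model owner
    finding: str
    mandated_change: str     # what MRM required, not merely suggested
    status: str = "open"     # open -> remediated -> verified
    history: list = field(default_factory=list)

    def update(self, new_status: str) -> None:
        """Record each status transition so audit can evidence that the
        challenge actually influenced the model, not just critiqued it."""
        self.status = new_status
        self.history.append((date.today(), new_status))

entry = ChallengeEntry("PD-014", "MRM Quant Team",
                       "Segment-level backtest breach",
                       "Recalibrate LGD floor before next run")
entry.update("remediated")
entry.update("verified")
print(entry.status, len(entry.history))  # verified 2
```

Keeping the mandated change and the dated status history in the same record is what turns meeting minutes into evidence of influence, which is the point of the strengthened “Effective Challenge” expectation.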

Model Inventory Management Considerations

An institution’s model inventory can no longer be a static spreadsheet; it requires a comprehensive system with clear documentation to ensure continuity of operations.

  • Dynamic Inventory & Drift Alerts: Systems should automatically flag models for re-validation if market conditions or data relevance change significantly. Inventory systems must include automated triggers for re-evaluation and tracking. Ongoing monitoring must specifically track if a model is performing as expected relative to changes in market conditions or client activities. When performance deviates from established thresholds (drift), the system should trigger alerts for model adjustment, recalibration, or redevelopment.
  • Remediation Tracking: Systems must deeply support the tracking of recommendations, responses, exceptions, and remediation efforts to ensure operational continuity regardless of personnel turnover.
  • Tiered Management for Low-Level Risk: For models deemed immaterial due to low exposure or purpose, the system should focus on identification and monitoring, allowing organizations to re-evaluate these models only if their use becomes material in the future.

→ Takeaway: Upgrade MRM software platforms to support dynamic, automated lifecycles. Configure systems to explicitly identify models that apply statistical theories. Establish automated drift alerts that trigger exception workflows natively. Ensure your system segregates low-tier models into a monitoring-only status to save resources.
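As one possible shape for such a drift trigger, the sketch below compares the development and production score distributions with a Population Stability Index (PSI). The 0.1/0.2 action thresholds are common industry conventions, not SR 26-2 requirements:

```python
import math

# Minimal drift check, assuming a PSI-style comparison between the
# development population and recent production data. Thresholds are
# common industry conventions, not regulatory requirements.

def psi(expected: list, actual: list) -> float:
    """Population Stability Index over pre-binned population proportions."""
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected, actual) if e > 0 and a > 0)

def drift_action(score: float) -> str:
    if score >= 0.2:
        return "trigger re-validation workflow"
    if score >= 0.1:
        return "alert model owner"
    return "no action"

dev_bins  = [0.25, 0.25, 0.25, 0.25]   # development distribution
prod_bins = [0.40, 0.30, 0.20, 0.10]   # drifted production distribution
print(drift_action(psi(dev_bins, prod_bins)))
```

In a real inventory system the `drift_action` outcome would feed the exception workflow automatically, satisfying the expectation that deviation from thresholds triggers adjustment, recalibration, or redevelopment.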

Vendor “Black Box” Models Handling

Banking institutions frequently rely on third-party vendor models. The guidance emphasizes that the lack of transparency in proprietary vendor products (the classic “black box” scenario) does not exempt a banking organization from its risk management responsibilities. Vendor models must be validated even if the underlying code is proprietary.

Organizations should strive to understand the vendor model’s design, development data, and relevant theories to validate conceptual soundness. When the underlying code is unavailable, validation teams must rely heavily on outcomes analysis and back-testing to assess if the model remains fit for purpose.

→ Takeaway for Vendor Management:

  1. Demand Vendor Transparency: Risk/MRM teams must push vendors for more developmental data or conduct more robust outcomes analysis to compensate for black box proprietary components. Determine how this impacts your organizational structure and assessment methods.
  2. Implement Overlays: Analysis of vendor performance should be used to support any necessary overlays or adjustments to the model’s output to mitigate risk if the tool does not perfectly align with the bank’s portfolio.
  3. Document Customizations: Any adjustments made to customize a vendor model for specific business needs must be deeply documented, justified, and evaluated as part of the formal validation process.
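The outcomes-analysis-to-overlay loop above can be sketched in a few lines: compare the vendor’s predicted default rates against observed outcomes per rating band, and derive a conservative multiplicative overlay wherever the vendor under-predicts. The bands, rates, and tolerance below are hypothetical:

```python
# Hedged sketch of outcomes analysis for a black-box vendor score:
# compare predicted vs. observed default rates per band and derive a
# conservative overlay where the vendor under-predicts. Bands, rates,
# and the 1.1 tolerance are hypothetical.

bands = {
    #  band : (vendor predicted default rate, observed default rate)
    "A": (0.010, 0.009),
    "B": (0.030, 0.032),
    "C": (0.080, 0.110),   # vendor materially under-predicts here
}

def overlays(tolerance: float = 1.1) -> dict:
    """Multiplicative overlay per band where observed exceeds predicted
    by more than the tolerance; otherwise 1.0 (no adjustment)."""
    out = {}
    for band, (pred, obs) in bands.items():
        out[band] = round(obs / pred, 2) if obs > pred * tolerance else 1.0
    return out

print(overlays())  # band C gets a ~1.4x overlay; A and B pass
```

Each non-unit overlay would then be documented and justified as part of the formal validation record, in line with the customization-documentation expectation above.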

Broader Implications

The issuance of SR 26-2 represents a critical maturation in the field of Model Risk Management. By shifting focus toward a nuanced definition of materiality, aggregate system risk, and the organizational standing of the effective challenge, the Federal Reserve, OCC, and FDIC are urging banks to move away from compliance as a purely defensive, “check-the-box” exercise. Instead, regulators are demanding that MRM become a strategic, highly influential function that actively protects the institution’s balance sheet while enabling safe technological adoption.

For banks hovering around the $30 billion asset mark, or those pursuing highly complex, algorithm-driven business models, the operational transition will require immediate attention. The required investments in automated inventory systems capable of real-time drift alerts, specialized quantitative talent for rigorous vendor model back-testing, and the establishment of parallel AI governance frameworks for generative AI will be substantial. Cross-departmental collaboration – especially between MRM, Enterprise Risk, and IT – is no longer optional; it is a regulatory expectation.

However, this revised guidance also offers a pragmatic release valve for the industry. By explicitly excluding deterministic tools, simple spreadsheets, and novel agentic AI from this specific framework, regulators are giving MRM leaders the permission they need to optimize their portfolios. This allows teams to focus their finite, high-value quantitative resources on the complex statistical engines that actually drive systemic financial risk, rather than getting bogged down in low-impact administrative validations.

Ultimately, SR 26-2 is a clear signal: the era of static, siloed risk management is over. Financial institutions must cultivate dynamic, automated, and deeply integrated risk functions capable of challenging the business, continuously monitoring algorithmic drift, and adapting to the rapid convergence of traditional statistics and autonomous AI. Those that successfully modernize their frameworks will not only satisfy regulatory scrutiny but will unlock safer, more resilient avenues for algorithmic innovation, positioning themselves as leaders in the next generation of digital banking.

→ Ready to Convert MRM From a Deployment Bottleneck into a Strategic Backbone – And Close the Novel AI Gap Before Revised Guidance Dictates Remediation?

Navigating the 2026 SR 26-2 guidance requires more than just administrative policy updates; it demands intelligent, scalable, and automated governance workflows. Relying on legacy, manual processes is no longer an option if you want to scale model deployment with confidence. Ultimately, your Risk and MRM infrastructure will be the difference between a platform that helps accelerate innovation and one that stalls it.

Book a discovery call with Lumenova AI today and explore our purpose-built solutions for Banking and Finance Model Risk Management and AI governance.

Let our team of experts help you turn these new standards into a competitive advantage. From streamlining your dynamic model inventory and automating drift alerts to safely bridging the GenAI gap, Lumenova AI empowers your institution to remain resilient, agile, and ready for the future of algorithmic innovation.

Frequently Asked Questions

Which institutions does SR 26-2 focus on?

The revised regulatory framework shifts to a tailored supervisory approach, focusing most heavily on banking organizations with over $30 billion in assets, as well as institutions with high-risk profiles, regardless of their total size.

How does SR 26-2 define materiality?

Materiality is no longer based solely on financial impact. SR 26-2 formally introduces it as a combination of model exposure (the quantitative financial impact) and model purpose (the qualitative, strategic, or regulatory importance).

Does SR 26-2 cover generative AI and agentic AI?

No. Generative AI and agentic AI are explicitly excluded from the formal MRM scope under this specific guidance, as regulators currently categorize them as “novel and rapidly evolving”. However, institutions are still strictly expected to manage these technologies using their broader enterprise risk and governance practices.

What happens when an AI agent interacts with a traditional model?

In a hybrid architecture, if an AI agent is used to access, query, or utilize a traditional statistical or non-generative AI model, that underlying traditional model remains fully within the scope of the guidance and must undergo comprehensive MRM validation, performed with the full context of the integration.

Do spreadsheets and rule-based tools count as models?

No. Simple arithmetic spreadsheets and deterministic, rule-based software are explicitly excluded from the formal definition of a model. They should be removed from your MRM inventory and instead tracked as “Critical Business Tools” subject to standard IT and operational risk controls.

How should banks handle vendor “black box” models?

The lack of transparency in vendor products does not exempt a banking organization from its risk management responsibilities. When the underlying code is unavailable, validation teams must rely heavily on outcomes analysis and back-testing to assess if the model remains fit for purpose, applying documented overlays to outputs where necessary.

