
Model inventories are rapidly expanding. AI-enabled tools are entering workflows that were once deterministic, and decisioning environments are more interconnected than ever. At the same time, regulatory scrutiny around model risk management continues to intensify. In many institutions, classification determines validation depth, monitoring intensity, and escalation pathways while informing board reporting. If classification is wrong, every downstream control is misaligned. And in 2026, model classification is no longer just about assigning a tier; it is about understanding data lineage, use case evolution, interdependencies, and governance accountability in a decentralized, AI-driven environment.

We recently spoke with Mark Longman, Director of Analytics and Regulatory Technology. Here are five blind spots he believes risk and compliance leaders should consider addressing now.

1. The “Set It and Forget It” Mentality

The Blind Spot

Model classification frameworks are often designed during a regulatory remediation effort or inventory modernization initiative. Once documented and approved, they can remain largely unchanged for years. However, model risk management is an ongoing process. “There’s really no sort of one and done when it comes to model risk management,” said Longman.

Why It Matters

Classification is not merely descriptive; it is prescriptive. It drives the depth of validation, the frequency of monitoring, the intensity of governance oversight, and the level of senior management visibility. As Longman notes, data fragmentation is compounding the challenge. “There’s data everywhere – internal, cloud, even shadow IT – and it’s tough to get a clear view into the inputs into the models,” he said. When inputs are unclear, tiering becomes inherently subjective, and if classification frameworks are not reviewed regularly, governance intensity can become misaligned with real exposure.
Therefore, static classification is a growing risk, especially in a world of rapidly expanding AI use cases. In a supervisory environment that continues to scrutinize model definitions, particularly as AI tools proliferate, a dynamic, periodically refreshed classification process can demonstrate institutional vigilance.

2. Assuming Third-Party Models Reduce Governance Accountability

The Blind Spot

There is often an implicit belief that vendor-provided models carry less governance burden because they were developed externally.

Why It Matters

Vendor-provided models continue to grow, particularly in AI-driven solutions, but supervisory expectations remain firm. “Third-party models do not diminish the responsibility of the institution for its governance and oversight of the model – whether it’s monitoring, ongoing validation, just evaluating drift [or] model documentation,” Longman said. “The board and senior managers are responsible to make sure that these models are performing as expected, and that includes third-party models.”

Regulators consistently emphasize that institutions remain responsible for the outcomes produced by models used in their decisioning environments, regardless of origin. If a vendor model influences credit approvals, pricing, fraud decisions, or capital calculations, it directly affects customers, financial performance, and compliance exposure. Treating third-party models as inherently lower risk can also distort internal tiering frameworks. When vendor models are under-classified, validation depth and monitoring rigor may be insufficient relative to their true impact.

3. Limited Situational Awareness of Model Interdependencies

The Blind Spot

Upstream models and data sources often feed multiple downstream models simultaneously.

Why It Matters

Risk often flows across interdependencies. When upstream models degrade in performance or introduce bias, downstream models inherit that exposure.
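As an illustrative sketch only (the model names and dependency edges below are invented), an institution can make this propagation concrete by keeping a model-to-model dependency map and walking it to find every model exposed when an upstream model degrades:

```python
# Illustrative only: model names and dependency edges are invented.
from collections import deque

# Each upstream model maps to the downstream models consuming its output.
DEPENDENCIES = {
    "income_estimator": ["application_score", "pricing_model"],
    "application_score": ["limit_assignment", "collections_priority"],
    "pricing_model": [],
    "limit_assignment": [],
    "collections_priority": [],
}

def downstream_exposure(model: str) -> set[str]:
    """Breadth-first walk returning every model that inherits risk from `model`."""
    seen: set[str] = set()
    queue = deque(DEPENDENCIES.get(model, []))
    while queue:
        m = queue.popleft()
        if m not in seen:
            seen.add(m)
            queue.extend(DEPENDENCIES.get(m, []))
    return seen

# Degradation in the income estimator exposes four downstream models.
print(sorted(downstream_exposure("income_estimator")))
# ['application_score', 'collections_priority', 'limit_assignment', 'pricing_model']
```

A map like this lets tiering weight a model not only by its own use case, but by the materiality of everything downstream of it.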
If multiple material decisions depend on the same data transformation or feature engineering process, concentration risk emerges. Without visibility into these dependencies, tiering assessments may underestimate cumulative risk, and monitoring frameworks may fail to detect systemic vulnerabilities. “There has to be a holistic view of what models are being used for – and really somebody to ensure there’s not that overlap across models,” Longman said.

Supervisors are increasingly interested in understanding how model risk propagates through business processes. When institutions cannot articulate how models interact, it raises broader concerns about situational awareness and control effectiveness. Therefore, capturing interdependencies within the classification framework does more than improve documentation. It enables more accurate tiering, more targeted monitoring, and more informed governance oversight.

4. Excluding Models Without Defensible Rationale

The Blind Spot

Gray-area tools frequently sit outside formal inventories: rule-based engines, spreadsheet models, scenario calculators, heuristic decision aids, or emerging AI tools used for analysis and summarization. These tools may not neatly fit legacy definitions of a “model,” and so they are sometimes excluded without robust documentation.

Why It Matters

Regulatory definitions of “model” have broadened over time. Exclusion itself is not the problem; what creates risk is the absence of defensible reasoning and documentation. Longman describes the risk clearly: “Some [teams] are deploying AI solutions that are sort of unbeknownst to the model risk management community – and almost creating what you might think of as a shadow model inventory.” Without visibility, institutions cannot confidently characterize use, trace inputs, or assign appropriate tiers, according to Longman. It also undermines the credibility of the official inventory during examinations.
A well-governed program can articulate why certain tools fall outside model risk management scope, referencing documented criteria aligned with regulatory guidance. Without that evidence, exclusions can appear arbitrary, suggesting gaps in oversight.

5. Inconsistent or Subjective Classification Frameworks

The Blind Spot

As inventories scale and governance teams expand, classification decisions are often distributed across reviewers. Over time, discrepancies can emerge.

Why It Matters

Inconsistency undermines both risk management and regulatory confidence. If two models with comparable use cases and impact profiles are assigned different tiers without clear justification, it signals that the framework is not being applied uniformly. AI adds even more complexity. When it comes to emerging AI model governance versus traditional model governance, there is a lot to unpack, says Longman: “The AI models themselves are a lot more complicated than your traditional logistic or multiple regression models. The data, the prompting – you need to monitor the prompts that the LLMs, for example, are responding to, and you need to make sure you can [detect] what you may think of as prompt drift,” Longman said.

As frameworks evolve, particularly to incorporate AI, automation, and new regulatory interpretations, institutions must ensure that changes are cascaded across the entire inventory. Partial updates or selective reclassification introduce fragmentation. Longman recommends formalizing classification through a structured decision tree embedded in policy to ensure consistent outcomes across business units. Beyond clear documentation, a strong classification program is applied consistently, measured objectively, and periodically reassessed across the full portfolio.

BONUS – 6. Elevating Classification with Data-Level Visibility

Some institutions are extending classification discipline beyond models to the data layer itself.
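A minimal sketch of what that data-level discipline might look like (all variable and model names below are invented): map each input variable to the models it feeds, so that a disruption to a data source immediately identifies the models needing review.

```python
# Hypothetical variable-to-model data inventory (all names invented).
DATA_INVENTORY = {
    "utility_payment_history": ["behavior_score", "collections_model"],
    "employment_status": ["application_score"],
    "card_utilization": ["behavior_score", "limit_model"],
}

def impacted_models(disrupted_variables: list[str]) -> set[str]:
    """Return every model whose inputs include a disrupted variable."""
    return {model
            for variable in disrupted_variables
            for model in DATA_INVENTORY.get(variable, [])}

# An event disrupting payment behavior flags three models for review.
print(sorted(impacted_models(["utility_payment_history", "card_utilization"])))
# ['behavior_score', 'collections_model', 'limit_model']
```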
Longman describes organizations that maintain not only a model inventory but a data inventory, mapping variables to the models they influence. This approach allows institutions to quickly assess downstream effects when operational or environmental changes occur, including system updates or even natural disasters affecting payment behavior. In an AI-driven environment, traceability may become a competitive differentiator.

Conclusion

Model classification is foundational. It determines how risk is measured, monitored, escalated, and reported. In a rapidly evolving regulatory and technological environment, it cannot remain static. Institutions that invest now in transparency, consistency, and data-level visibility will not only reduce supervisory friction – they will build a governance framework capable of supporting the next generation of AI-enabled decisioning.

As regulatory complexity increases, compliance with model risk management requirements is crucial for operational resilience.

Model governance is growing increasingly important as more companies implement machine learning model deployment and AI analytics solutions into their decision-making processes. Institutions use models to influence business decisions and identify risks based on data analysis and forecasting. While models do increase business efficiency, they also bring their own set of unique risks. Robust model governance can help mitigate these concerns while still maintaining efficiency and a competitive edge.

What is model governance?

Model governance refers to the framework your organization has in place for overseeing how you manage model development, deployment, validation and usage.1 This can involve policies like who has access to your models, how they are tested, how new versions are rolled out or how they are monitored for accuracy and bias.2 Because models analyze data and hypotheses to make predictions, there's inherent uncertainty in their forecasts.3 This uncertainty can sometimes make them vulnerable to errors, which is what makes robust governance so important.

Machine learning model governance in banks, for example, might include internal controls, audits, a thorough inventory of models, proper documentation, oversight and transparent policies and procedures.

One significant part of model governance is ensuring your business complies with federal regulations. The Federal Reserve Board and the Office of the Comptroller of the Currency (OCC) have published guidance for how models are developed, implemented and used. Financial institutions that utilize models must ensure their internal policies are consistent with these regulations.
The OCC requirements for financial institutions include:

- Model validations at least once a year
- Critical review by an independent party
- Proper model documentation
- Risk assessment of models' conceptual soundness, intended performance and comparisons to actual outcomes
- Vigorous validation procedures that mitigate risk

Why is model governance important — especially now?

More and more organizations are implementing AI, machine learning and analytics into their models. This means that in order to keep up with the competition's efficiency and accuracy, your business may need complex models as well. But as these models become more sophisticated, so does the need for robust governance.3 Undetected model errors can lead to financial loss, reputational damage and a host of other serious issues. These errors can be introduced at any point, from design to implementation, or even after deployment via inappropriate usage of the model, drift or other issues. With model governance, your organization can understand the intricacies of all the variables that can affect your models' results, controlling production closely with even greater efficiency and accuracy.

Some common issues that model governance monitors for include:2

- Testing for drift to ensure that accuracy is maintained over time
- Ensuring models maintain accuracy if deployed in new locations or to new demographics
- Providing systems to continuously audit models for speed and accuracy
- Identifying biases that may unintentionally creep into the model as it analyzes and learns from data
- Ensuring transparency that meets federal regulations, rather than operating within a black box. Good model governance includes documentation that explains data sources and how decisions are reached.

Model governance use cases

Below are three examples of use cases for model governance that can aid in advanced analytics solutions.
Credit scoring

A credit risk score can be used to help banks determine the risks of loans (and whether certain loans are approved at all). Governance can catch biases early, such as unintentionally accepting only lower credit scores from certain demographics. Audits can also catch biases that might result in a qualified applicant not getting a loan they should.

Interest rate risk

Governance can catch a model making interest rate errors, such as determining that a high-risk account is actually low-risk or vice versa. Sometimes changing market conditions, like a pandemic or recession, can unintentionally introduce errors into interest rate data analysis that governance will catch.

Security challenges

One department in a company might be utilizing a model specifically for their demographic to increase revenue, but if another department used the same model, they might be violating regulatory compliance.4 Governance can monitor model security and usage, ensuring compliance is maintained.

Why Experian?

Experian® provides risk mitigation tools and objective, comprehensive model risk management expertise that can help your company implement custom models, achieve robust governance and comply with relevant federal regulations. In addition, Experian can provide customized modeling services that deliver unique analytical insights to ensure your models are tailored to your specific needs. Experian's model risk governance services utilize business consultants with tenured experience who can provide expert independent, third-party reviews of your model risk management practices. Key services include:

- Back-testing and benchmarking: Experian validates performance and accuracy, including statistical metrics that compare your model's performance to previous years and industry benchmarks.
- Sensitivity analysis: While all models have some degree of uncertainty, Experian helps ensure your models still fall within the expected ranges of stability.
- Stress testing: Experian's experts will perform a series of characteristic-level stress tests to determine sensitivity to small changes and extreme changes.
- Gap analysis and action plan: Experts will provide a comprehensive gap analysis report with best-practice recommendations, including identifying discrepancies with regulatory requirements.

Traditionally, model governance can be time-consuming and challenging, with numerous internal hurdles to overcome. Utilizing Experian's business intelligence and analytics solutions, alongside its model risk management expertise, allows clients to seamlessly meet requirements and experience accelerated implementation and deployment.

Experian can optimize your model governance

Experian is committed to helping you optimize your model governance and risk management.

References

1. "Model Governance," Open Risk Manual, accessed September 29, 2023. https://www.openriskmanual.org/wiki/Model_Governance
2. Lorica, Ben, Harish Doddi, and David Talby. "What Are Model Governance and Model Operations?" O'Reilly, June 19, 2019. https://www.oreilly.com/radar/what-are-model-governance-and-model-operations/
3. "Comptroller's Handbook: Model Risk Management," Office of the Comptroller of the Currency, August 2021. https://www.occ.treas.gov/publications-and-resources/publications/comptrollers-handbook/files/model-risk-management/pub-ch-model-risk.pdf
4. Doddi, Harish. "What is AI Model Governance?" Forbes, August 2, 2021. https://www.forbes.com/sites/forbestechcouncil/2021/08/02/what-is-ai-model-governance/?sh=5f85335f15cd

Changes in your portfolio are a constant. To accelerate growth while proactively identifying risk, you’ll need a well-informed portfolio risk management strategy.

What is portfolio risk management?

Portfolio risk management is the process of identifying, assessing, and mitigating risks within a portfolio. It involves implementing strategies that allow lenders to make more informed decisions, such as whether to offer additional credit products to customers, or to identify credit problems before they impact the bottom line.

Leveraging the right portfolio risk management solution

Traditional approaches to portfolio risk management may lack a comprehensive view of customers. To effectively mitigate risk and maximize revenue within your portfolio, you’ll need a portfolio risk management tool that uses expanded customer data, advanced analytics, and modeling.

- Expanded data. Differentiated data sources include marketing data, traditional credit and trended data, alternative financial services data, and more. With robust consumer data fueling your portfolio risk management solution, you can gain valuable insights into your customers and make smarter decisions.
- Advanced analytics. Advanced analytics can analyze large volumes of data to unlock greater insights, resulting in increased predictiveness and operational efficiency.
- Model development. Portfolio risk modeling methodologies forecast future customer behavior, enabling you to better predict risk and gain greater precision in your decisions.

Benefits of portfolio risk management

Managing portfolio risk is crucial for any organization. With an advanced portfolio risk management solution, you can:

- Minimize losses. By monitoring accounts for negative performance, you can identify risks before they materialize, minimizing losses.
- Identify growth opportunities. With comprehensive consumer data, you can connect with customers who have untapped potential to drive cross-sell and upsell opportunities.
- Enhance collection efforts. For debt portfolios, having the right portfolio risk management tool can help you quickly and accurately evaluate collections recovery.

Maximize your portfolio potential

Experian offers portfolio risk analytics and portfolio risk management tools that can help you mitigate risk and maximize revenue within your portfolio. Get started today.

This is the third in a series of blog posts highlighting optimization, artificial intelligence, predictive analytics, and decisioning for lending operations in times of extreme uncertainty. The first post dealt with optimization under uncertainty and the second with predicting consumer payment behavior. In this post I will discuss how well credit scores will work for consumer lenders during and after the COVID-19 crisis and offer some recommendations for what lenders can do to measure and manage that model risk in a time like this.

Perhaps no analytics innovation has created opportunity for more individuals than the credit score. The first commercially available credit score was developed by MDS (now part of Experian) in 1987. Soon afterwards FICO® popularized the use of scores that evaluate the risk that a consumer would default on a loan. Prior to that, lending decisions were made by loan officers largely on the basis of their personal familiarity with credit applicants. Using data and analytics to assess risk not only created economic opportunity for millions of borrowers, but also greatly improved the financial soundness of lending institutions worldwide.

Predictive models such as credit scores have become the most critical tools for consumer lending businesses. They determine, among other things, who gets a loan and at what price, and how an account such as a credit line is managed through its life cycle. Predictive models are in many cases critical for calculating loan and loss reserves, for stress testing, and for complying with accounting standards. Nearly all lenders rely on generic scores such as the FICO® score and VantageScore® credit score. Most larger companies also have a portfolio of custom scorecards that better predict particular aspects of payment behavior for the customers of interest. So how well are these scorecards likely to perform during and after the current pandemic?
The models need to predict consumer credit risk even as:

- Nearly all consumers change their behaviors in response to the health crisis,
- Millions of people—in America and internationally—find their income suddenly reduced, and
- Consumers receive large numbers of accommodations from creditors, who have in turn temporarily changed some of their credit reporting practices in response to guidelines in the federal CARES Act.

In an earlier post, I pointed out that there is good reason to believe that credit scores will tend to continue to rank order consumers from most likely to least likely to repay their debts, even as we move from the longest economic expansion in history to a period of unforeseen and unexpected challenges. But the interpretation of the score (for example, the log odds or the bad rate) may need to be adjusted. Furthermore, that assumes the model was working well on a lender’s population before this crisis started. If it has been a long time since a scorecard was validated, that assumption needs to be questioned. Because experts are considering several different scenarios regarding both the immediate and long-term economic impacts of COVID-19, it’s important to have a plan for ongoing monitoring for as long as necessary.

Some lenders have strong Model Risk Management (MRM) teams complying with requirements from the Federal Reserve, the Federal Deposit Insurance Corporation (FDIC), and the Office of the Comptroller of the Currency (OCC). Those resources are now stretched thin. Other institutions, with fewer resources for MRM, are now discovering gaps in their model inventories as they implement operational changes. In either case, now’s the time to reassess how well scorecards are working. Good model validation practices are especially critical now if lenders are to continue to make the sound data-driven decisions that promote fairness for consumers and financial soundness for the institution.
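As a hedged sketch of that score re-interpretation (the score bands and outcome data below are invented): recompute the observed bad rate per score band over a recent outcome window, then compare it against the rates assumed when the scorecard was developed.

```python
# Invented data: (score, is_bad) outcome pairs and score bands, for illustration.
def bad_rate_by_band(outcomes, bands):
    """outcomes: (score, is_bad) pairs; bands: (low, high) half-open ranges.
    Returns the observed bad rate per band, or None for an empty band."""
    rates = []
    for low, high in bands:
        in_band = [is_bad for score, is_bad in outcomes if low <= score < high]
        rates.append(sum(in_band) / len(in_band) if in_band else None)
    return rates

bands = [(300, 580), (580, 670), (670, 740), (740, 851)]
recent = [(550, 1), (560, 1), (600, 1), (620, 0),
          (700, 0), (710, 1), (760, 0), (800, 0)]
print(bad_rate_by_band(recent, bands))  # [1.0, 0.5, 0.5, 0.0]
```

If the observed rates have shifted while the rank ordering holds, policy cutoffs can be moved rather than the model discarded.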
If you’re a credit risk manager responsible for the generic or custom models driving your lending, servicing, or capital allocation policies, there are several things you can do—starting now—to be sure that your organization can continue to make fair and sound lending decisions throughout this volatile period:

- Assess your model inventory. Do you have good documentation showing when each of the models in your organization was built? When was it last validated? Assign a level of criticality to each model in use.
- Starting with your most critical models, perform a baseline validation to determine how the model was performing prior to the global health crisis. It may be prudent to conduct not only your routine validation (verifying that the model was continuing to perform at the beginning of the period) but also a baseline validation with a shortened performance window (such as 6–12 months). That baseline validation will be useful if the downturn becomes a protracted one—in which case your scorecard models should be validated more frequently than usual. A shorter outcome window will allow a timelier assessment of the relationship between the score and the bad rate—which will help you update your lending and servicing policies to prevent losses.
- Determine if any of your scorecards had deteriorated even before the global pandemic, and consider recalibrating or rebuilding those scorecards. (Use metrics such as the Population Stability Index, the K-S statistic and the Gini coefficient to help with that decision.) Many lenders chose not to prioritize rebuilding their behavioral scorecards for account management or collections during the longest period of economic growth in memory. Those models may soon be among the most critical models in your organization as you work to maintain the trust of your accountholders while also maintaining your institution’s financial soundness.
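The three metrics named above can be sketched in pure Python. The score-band mix and the toy "good"/"bad" samples below are invented for illustration; production validation would use far larger samples and established tooling.

```python
import math

def psi(expected_pct, actual_pct):
    """Population Stability Index across score bands (shares must sum to 1).
    A common rule of thumb: < 0.10 stable, 0.10-0.25 investigate, > 0.25 shift."""
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected_pct, actual_pct))

def ks_statistic(scores_good, scores_bad):
    """Kolmogorov-Smirnov: max gap between the goods' and bads' score CDFs."""
    def cdf(xs, cutoff):
        return sum(x <= cutoff for x in xs) / len(xs)
    cutoffs = sorted(set(scores_good) | set(scores_bad))
    return max(abs(cdf(scores_good, c) - cdf(scores_bad, c)) for c in cutoffs)

def gini(scores_good, scores_bad):
    """Gini coefficient via the Mann-Whitney estimate of AUC (2*AUC - 1)."""
    wins = sum(1.0 if g > b else 0.5 if g == b else 0.0
               for g in scores_good for b in scores_bad)
    return 2 * wins / (len(scores_good) * len(scores_bad)) - 1

# Toy data: score-band shares at development vs. today, plus small samples
# of scores for accounts that stayed current (good) vs. defaulted (bad).
dev_mix = [0.20, 0.20, 0.20, 0.20, 0.20]
current_mix = [0.28, 0.24, 0.20, 0.16, 0.12]
good = [720, 700, 690, 660, 650]
bad = [640, 630, 620, 610, 580]

print(round(psi(dev_mix, current_mix), 3))        # 0.084: shift worth watching
print(ks_statistic(good, bad), gini(good, bad))   # 1.0 1.0: perfect separation
```

On this toy data the score separates goods and bads perfectly, so K-S and Gini are both 1.0; real scorecards land well below that, and the point is to track the trend across validations.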
Once the CARES Act accommodation period has expired, it will be important to revalidate your models more frequently than in the past—for as long as it takes until consumer behavior normalizes and the economy finds its footing.

When you find it appropriate to rebuild a scorecard model, consider whether now is the time to implement ethical and explainable AI. Some of our clients are finding that machine-learned models are more predictive than traditional scorecards. Early Experian research using data from the last recession indicates this will continue to be true for the foreseeable future. Furthermore, Experian has invested in research and development to help these clients deliver FCRA-compliant adverse action reasons to their consumers and to make the models explainable and transparent for model risk governance and compliance purposes.

The sudden economic volatility that has resulted from this global health crisis has been a shock to all organizations. It is important for lenders to take the pulse of their predictive models now and throughout the downturn. They are especially critical tools for making sound data-driven business decisions until the economy is less volatile. Experian is committed to helping your organization during times of uncertainty. For more resources, visit our Look Ahead 2020 Hub.