
A Quick Guide to Model Explainability

by Julie Lee 6 min read January 11, 2024

Model explainability has become a hot topic as lenders look for ways to use artificial intelligence (AI) to improve their decision-making. Within credit decisioning, machine learning (ML) models can often outperform traditional models at predicting credit risk.

ML models can also be helpful throughout the customer lifecycle, from marketing and fraud detection to collections optimization. However, without explainability, using ML models may result in unethical and illegal business practices.

What is model explainability? 

Broadly defined, model explainability is the ability to understand and explain a model’s outputs at either a high level (global explainability) or for a specific output (local explainability).1

  • Local vs global explanation: Global explanations attempt to explain the main factors that determine a model’s outputs, such as what causes a credit score to rise or fall. Local explanations attempt to explain specific outputs, such as what leads to a consumer’s credit score being 688. But it’s not an either-or decision — you may need to explain both.

Model explainability can also have varying definitions depending on who asks you to explain a model and how detailed of a definition they require. For example, a model developer may require a different explanation than a regulator.

Model explainability vs interpretability

Some people use model explainability and interpretability interchangeably. But when the two terms are distinguished, model interpretability may refer to how easily a person can understand and explain a model’s decisions.2 We might call a model interpretable if a person can clearly understand:

  • The features or inputs that the model uses to make a decision.
  • The relative importance of the features in determining the outputs.
  • What conditions can lead to specific outputs.

Both explainability and interpretability are important, especially for credit risk models used in credit underwriting. Below, however, we use model explainability as an overarching term that covers both explanations of a model's outputs and the interpretability of its internal workings.

ML models highlight the need for explainability in finance

Lenders have used credit risk models for decades. Many of these models have a clear set of rules and limited inputs, and they might be described as self-explanatory. These include traditional linear and logistic regression models, scorecards and small decision trees.3

AI analytics solutions, such as ML-powered credit models, have been shown to better predict credit risk. And most financial institutions are increasing their budgets for advanced analytics solutions and see their implementation as a top priority.4 

However, ML models can be more complex than traditional models, and they introduce the potential for a “black box.” In short, even if someone knows what goes into and comes out of the model, it’s difficult to explain what’s happening without an in-depth analysis.

Lenders now have to navigate a necessary trade-off. ML-powered models may be more predictive, but regulatory requirements and fair lending goals require lenders to use explainable models.

READ MORE: Explainability: ML and AI in credit decisioning

Why is model explainability required?

Model explainability is necessary for several reasons:

  • To comply with regulatory requirements: Decisions made using ML models need to comply with lending and credit-related regulations, including the Fair Credit Reporting Act (FCRA) and Equal Credit Opportunity Act (ECOA). Lenders may also need to ensure their ML-driven models comply with newer AI-focused regulations, such as the AI Bill of Rights in the U.S. and the E.U. AI Act.
  • To improve long-term credit risk management: Model developers and risk managers may want to understand why decisions are being made so they can audit, manage and recalibrate models.
  • To avoid bias: Model explainability is important for ensuring that lenders aren’t discriminating against groups of consumers.
  • To build trust: Lenders also want to be able to explain to consumers why a decision was made, which is only possible if they understand how the model comes to its conclusions.

There’s a real potential for growth if you can create and deploy explainable ML models. In addition to offering a more predictive output, ML models can incorporate alternative credit data* (also known as expanded FCRA-regulated data) and score more consumers than traditional risk models. As a result, the explainable ML models could increase financial inclusion and allow you to expand your lending universe.

READ MORE: Raising the AI Bar

How can you implement ML model explainability?

Navigating the trade-off and worries about explainability can keep financial institutions from deploying ML models. As of early 2023, only 14 percent of banks and 19 percent of credit unions have deployed ML models. Over a third (35 percent) list explainability of machine learning models as one of the main barriers to adopting ML.5 

Although a cautious approach is understandable and advisable, there are various ways to tackle the explainability problem. One major differentiator is whether you build explainability into the model or try to explain it post hoc, after it’s trained.

Using post hoc explainability

Complex ML models are, by their nature, not self-explanatory. However, several post hoc explainability techniques are model agnostic (they don’t depend on the internals of the model being analyzed), and they don’t require model developers to add specific constraints during training.

Shapley Additive Explanations (SHAP) is one widely used approach. It can help you understand the average marginal contribution of each feature to an output, such as how much each feature (input) affected the resulting credit score.

The analysis can be time-consuming and expensive, but it works with black box models even if you only know the inputs and outputs. You can also use the Shapley values for local explanations, and then aggregate those results into a global explanation.
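To make the idea concrete, here is a minimal sketch of exact Shapley values computed by brute-force coalition enumeration. The `score` function, feature names and baseline values are hypothetical stand-ins for a real credit model; in practice you would typically use a library such as `shap`, which approximates these values efficiently for large models.

```python
from itertools import combinations
from math import factorial

# Toy additive "credit score" model (hypothetical; stands in for a black box
# whose inputs and outputs are all we can observe).
def score(features):
    return (600
            + 0.5 * features.get("income", 0)
            - 2.0 * features.get("utilization", 0)
            + 1.0 * features.get("history_len", 0))

def shapley_value(model, instance, baseline, feature):
    """Average marginal contribution of `feature`, weighted over all
    coalitions of the remaining features (the exact Shapley formula)."""
    others = [f for f in instance if f != feature]
    n = len(instance)
    total = 0.0
    for k in range(len(others) + 1):
        for coalition in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            present = dict(baseline)  # features outside the coalition fall back to baseline
            present.update({f: instance[f] for f in coalition})
            without_f = model(present)
            present[feature] = instance[feature]
            with_f = model(present)
            total += weight * (with_f - without_f)
    return total

applicant = {"income": 80, "utilization": 30, "history_len": 12}
baseline = {"income": 0, "utilization": 0, "history_len": 0}
contrib = {f: shapley_value(score, applicant, baseline, f) for f in applicant}

# Shapley values are additive: baseline score + contributions == applicant's score.
assert abs(score(baseline) + sum(contrib.values()) - score(applicant)) < 1e-9
```

The additivity checked in the last line is what makes Shapley values attractive for adverse-action style explanations: each feature's contribution accounts for exactly its share of the gap between a baseline score and the applicant's score.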

Other post hoc approaches also might help shine a light into a black box model, including partial dependence plots and local interpretable model-agnostic explanations (LIME).
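As a sketch of the partial-dependence idea, the snippet below sweeps one feature across a grid while holding every other feature at its observed values and averages the model's output. The two-feature `score` function and the value ranges are invented for illustration.

```python
import numpy as np

# Hypothetical black-box scorer over two features: income and utilization.
def score(X):
    return 600 + 0.5 * X[:, 0] - 2.0 * X[:, 1]

rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(20, 120, 200),   # income
                     rng.uniform(0, 90, 200)])    # utilization

def partial_dependence(model, X, feature, grid):
    """Average prediction as `feature` sweeps `grid`, with every other
    feature held at its observed values."""
    averages = []
    for value in grid:
        X_swept = X.copy()
        X_swept[:, feature] = value
        averages.append(model(X_swept).mean())
    return np.array(averages)

grid = np.linspace(20, 120, 5)
pd_income = partial_dependence(score, X, 0, grid)

# For this linear toy model the curve is a straight line with slope 0.5.
assert np.allclose(np.diff(pd_income) / np.diff(grid), 0.5)
```

Plotting `pd_income` against `grid` gives the familiar partial dependence plot: a global view of how one feature moves the average score.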

READ MORE: Getting AI-driven decisioning right in financial services 

Build explainability into model development

Post hoc explainability techniques have limitations and might not be sufficient to address some regulators’ explainability and transparency concerns.6 Alternatively, you can try to build explainability into your models. Although you might give up some predictive power, the approach can be a safer option. 

For instance, you can identify features that could potentially lead to biased outcomes and limit their influence on the model. You can also compare the explainability of various ML-based models to see which may be more or less inherently explainable. For example, gradient boosting machines (GBMs) may be preferable to neural networks for this reason.7

You can also use ML to blend traditional and alternative credit data, which may provide a significant lift — around 60 to 70 percent compared to traditional scorecards — while maintaining explainability.8

READ MORE: Journey of an ML Model 

How Experian can help

As a leader in machine learning and analytics, Experian partners with financial institutions to create, test, validate, deploy and monitor ML-driven models. Learn how you can build explainable ML-powered models using credit bureau, alternative credit, third-party and proprietary data. And monitor all your ML models with a web-based platform that helps you track performance, detect drift and prepare for compliance and audit requests.

*When we refer to “Alternative Credit Data,” this refers to the use of alternative data and its appropriate use in consumer credit lending decisions, as regulated by the Fair Credit Reporting Act. Hence, the term “Expanded FCRA Data” may also apply and can be used interchangeably.

1-3. FinRegLab (2021). The Use of Machine Learning for Credit Underwriting

4. Experian (2022). Explainability: ML and AI in credit decisioning

5. Experian (2023). Finding the Lending Diamonds in the Rough

6. FinRegLab (2021). The Use of Machine Learning for Credit Underwriting

7. Experian (2022). Explainability: ML and AI in credit decisioning

8. Experian (2023). Raising the AI Bar
