Predicting Consumer Payment Behavior in a Time of Extreme Uncertainty

by Jim Bander 4 min read April 20, 2020

This is the second in a series of blog posts highlighting optimization, artificial intelligence, predictive analytics, and decisioning for lending operations in times of extreme uncertainty. The first post dealt with optimization under uncertainty.

The word “unprecedented” gets thrown around pretty carelessly these days. When I hear that word, I think fondly of my high school history teacher.  Mr. Fuller had a sign on his wall quoting the philosopher-poet George Santayana: “Those who cannot remember the past are condemned to repeat it.” Some of us thought it meant we had to memorize as many facts as possible so we wouldn’t have to go to summer school.

The COVID-19 crisis–with not only health consequences but also accompanying economic and financial impacts–certainly breaks with all precedents. The bankers and other businesspeople I’ve been listening to are rightly worried that This Time is Different. While I’m sure there are history teachers who can name the last time a global disaster led to a widespread humanitarian crisis and an economic and financial downturn, I’m even more sure times have changed a lot since then.

But there are plenty of recent precedents to guide business leaders and other policymakers through this crisis. Hurricanes Katrina and Sandy impacted large regions of the United States, with terrible human consequences followed by financial ones. Dozens of local disasters—floods, landslides, earthquakes—devastated smaller numbers of people in equally profound ways. The Great Recession, starting in 2008, put millions of Americans and others around the world out of work. Each of those disasters, like this one, broke with all precedents in various ways. Each of those events was in many ways a dress rehearsal, as bankers and other lenders learned to provide assistance to distressed businesses and consumers, while simultaneously planning for the inevitable changes to their balance sheets and income statements.

Of course, the way we remember the past has changed. Just as most of us no longer memorize dates–we search for them on the web–businesspeople turn to their databases and use analytics to understand history.

I’ve been following closely as the data engineers and data scientists here at Experian have worked on perhaps their most important problem ever. Using Experian’s Ascend Analytical Sandbox–named last year as the Best Overall Analytics Platform–they combed through over eighteen years of anonymized historical data covering every credit report in the United States. They asked–using historical experience, wisdom, time-consuming analytics, a little artificial intelligence, and a lot of hard work–whether predicting credit performance during and after a crisis is possible. They even considered scenarios regarding what happens as creditors change the way they report consumer delinquencies to the credit bureaus.

After weeks of sleepless nights, they wrote down their conclusions.  I’ve read their analysis carefully and I’m pleased to report that it says…Drumroll, please…Yes, but.

Yes, it’s possible to predict consumer behavior after a disaster. But not in precisely the same way those predictions are made during a period of economic growth. For a credit risk manager to review a lending portfolio and to predict its credit losses after a crisis requires looking at more data–and looking at it a little differently–than during other periods.

Yes, after each disaster, credit scores like FICO® and VantageScore® continued to rank consumers from most likely to least likely to repay their debts. But the interpretation of the score changes. Technically speaking, there is a substantial shift in the odds ratio that is particularly pronounced when a score is applied to subprime consumers. To predict borrower behavior more accurately, our scientists found that it helps to look at ten additional categories of data attributes and a few additional types of mathematical models.
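The odds-ratio shift can be illustrated with a toy calculation. The numbers below are entirely hypothetical (not Experian data), but they show the pattern described above: the score still rank-orders risk after a crisis, yet the good:bad odds at each score band drop, and they drop proportionally more in the subprime band.

```python
# Illustrative sketch with made-up portfolios; the band cutoffs, counts,
# and default rates are assumptions for demonstration only.

def odds_ratio_by_band(scores, defaulted, bands):
    """Good:bad odds for each (low, high] score band."""
    result = {}
    for low, high in bands:
        in_band = [d for s, d in zip(scores, defaulted) if low < s <= high]
        goods = sum(1 for d in in_band if not d)
        bads = sum(1 for d in in_band if d)
        result[(low, high)] = goods / bads if bads else float("inf")
    return result

bands = [(300, 620), (620, 850)]          # subprime, prime (hypothetical cutoffs)
scores = [580] * 100 + [720] * 100        # 100 subprime + 100 prime accounts

# Pre-crisis outcomes: 20% of subprime and 4% of prime accounts default.
pre = [True] * 20 + [False] * 80 + [True] * 4 + [False] * 96
# Post-crisis outcomes: defaults double in both bands.
post = [True] * 40 + [False] * 60 + [True] * 8 + [False] * 92

odds_pre = odds_ratio_by_band(scores, pre, bands)
odds_post = odds_ratio_by_band(scores, post, bands)

# Rank-ordering survives the crisis: prime odds still exceed subprime odds.
# But the meaning of a given score shifts: subprime odds fall from 4:1 to
# 1.5:1 (a 62% drop), prime from 24:1 to 11.5:1 (a 52% drop).
```

A lender reading "4:1 odds" off a pre-crisis chart for a 580 score would badly underestimate post-crisis losses, which is exactly why the interpretation, not the ranking, is what changes.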

Yes, there are attributes on the credit report that help lenders identify consumer distress, willingness, and ability to pay. But the data engineers identified that during times like these it is especially helpful to look beyond a single point in time; trends in a consumer’s payment history help lenders understand whether that customer is changing their typical behavior.
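As a hypothetical sketch of what "looking beyond a single point in time" means (this is not Experian's attribute logic, just an illustration of a trended attribute): a simple least-squares slope over recent monthly balances can separate a consumer whose utilization is stable from one whose balances are climbing fast, even though a single-month snapshot might look similar.

```python
# Toy trended attribute: the balance figures below are invented examples.

def balance_trend(monthly_balances):
    """Least-squares slope (dollars per month) over the trailing months."""
    n = len(monthly_balances)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(monthly_balances) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, monthly_balances))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

steady = [1000, 1020, 980, 1010, 990, 1000]      # stable revolving balance
stressed = [1000, 1400, 1900, 2500, 3200, 4000]  # balances rising ~$600/month

# The trend, not the latest balance alone, reveals the change in behavior.
```

In practice a trended attribute like this would be one input among many; the point is simply that a time series carries a distress signal a snapshot cannot.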

Yes, the data reported to the credit bureaus is predictive, especially over time. But when expanded FCRA data is available beyond what is traditionally reported to a bureau, that data further improves predictions.

All told, the data engineers found over 140 data attributes that can help lenders and others better manage their portfolio risk, understand consumer behavior, appreciate how the market is changing, and choose their next best action. The list of attributes might be indispensable to a credit data specialist whose institution needs to weather the coming storm.

Because Experian knows how important it is to learn from historical precedents, we’re sharing the list at no charge with qualified risk managers. To get the latest Experian data and insights or to request the Crisis Response Attributes recommendation, visit our Look Ahead 2020 page.

Learn more
