
Optimizing Lending Operations in a Time of Extreme Uncertainty

by Jim Bander 4 min read April 14, 2020

This is the first in a series of blog posts highlighting optimization, artificial intelligence, predictive analytics, and decisioning for lending operations in times of extreme uncertainty.

Like all businesses, lenders are confronting tremendous change and uncertainty amid the COVID-19 crisis. While focusing first on how to keep their employees and customers safe in the new normal, they are asking how to make data-driven decisions in this environment. It’s only natural that business people are skeptical about whether analytics will work in a situation like today’s, in which the data deviate from all historical precedent. Certainly, nobody predicted, for example, that the number of loans with forbearance requests would increase by over 1,000% during each two-week period in March. Can anyone possibly make an optimized decision when things are changing so quickly and so much is unknown?

Prescriptive analytics, also known as mathematical optimization, is the practice of developing a business strategy to achieve a business objective subject to capacity and other constraints, often using a demand forecast. For example, banks use optimization software to develop the marketing and debt management strategies that run their lending operations. But what happens when the demand forecast might be wrong, when the constraints change quickly, and when decision-makers cannot agree on a single objective? The reality is that decision-makers have to balance multiple competing objectives related to many different stakeholders. And, especially during the COVID-19 crisis and the period of change that will certainly follow, they have to do so in the face of uncertainty.
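As a concrete, deliberately simplified illustration, the sketch below chooses how many accounts in each customer segment receive a collections treatment so as to maximize expected recoveries within a budget. The segment sizes, costs, and recovery rates are hypothetical, and scipy’s off-the-shelf linear programming solver stands in for a full optimization product such as Marketswitch.

```python
# A minimal prescriptive-analytics sketch: choose how many accounts in each
# segment receive an outreach treatment to maximize expected recoveries,
# subject to a budget constraint. All numbers are hypothetical.
from scipy.optimize import linprog

recovery = [120.0, 80.0, 45.0]   # expected recovery per treated account ($)
cost = [10.0, 6.0, 4.0]          # cost to treat one account in each segment ($)
size = [5000, 8000, 12000]       # accounts available in each segment
budget = 90_000.0

# linprog minimizes, so negate the recoveries to maximize them.
c = [-r for r in recovery]
# One inequality constraint: total treatment cost <= budget.
A_ub = [cost]
b_ub = [budget]
# Each decision variable is bounded by its segment size.
bounds = [(0, s) for s in size]

res = linprog(c=c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("Treated accounts per segment:", res.x)
print("Expected recoveries: $%.0f" % -res.fun)
```

The demand forecast enters through those assumed recovery rates; the techniques below address what to do when inputs like these cannot be fully trusted.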

Let’s discuss some of the methods that analysts use to control risk while optimizing lending practices during times like these. These techniques, collectively known as robust optimization and robust statistics, help lenders and other business people deal with the uncomfortable reality that we do not know what the future holds.  

Consider a hypothetical bank or other lender servicing a portfolio of consumer loans and forecasting its loss performance in this environment. Management probably has several competing objectives: improving service levels on the digital channel, minimizing credit and fraud losses, operating within a reduced budget, and coping with uncertainty about how many employees will be available and which vendors can maintain adequate service levels. Furthermore, they anticipate further unexpected changes, and they need to be able to update their strategies quickly.

The mathematics can be quite technical, but Experian’s Marketswitch Optimization is user-friendly software that helps business people (not engineers) design and deploy optimal strategies for practices such as Account Management and Loan Originations in such a dynamic and uncertain environment. The bank’s business analysts (not computer specialists or mathematicians) will use techniques such as these:

  • With Sensitivity Analysis, analysts explore how their optimized Account Management, Collections, and Loan Originations strategies perform as input variables shift.
  • Optimization Scenarios with Uncertainty (technically known as Stochastic Optimization) let managers and analysts design operational strategies that control risk, particularly the bank’s exposure to probabilistic and worst-case scenarios; a minimal sketch follows this list.
  • Using Scenario Performance Analysis, the lender’s team will validate and test their optimization scenarios against a variety of different data sets to understand how their strategies would perform in each case.
  • Model Quality Evaluation techniques help credit risk managers compare model predictions against actual performance in a quickly changing economy (a simple calibration check is sketched below).
  • Model impact analysis (related to Model Risk Management) helps senior leadership assess when it is time to invest in improving its statistical models.
  • Robust Model Calibration Analysis removes unjustifiable variation from the lender’s predictive models so that their predictions stay valid as conditions change over time.
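To make the second bullet concrete, below is a minimal sketch of optimization under scenario uncertainty, reusing the hypothetical treatment problem from above. Rather than trusting a single demand forecast, it maximizes the probability-weighted recovery across three economic scenarios while requiring that even the worst case clears a recovery floor. The scenario rates, probabilities, and the $400,000 floor are all illustrative assumptions, not Marketswitch’s actual formulation.

```python
# A minimal stochastic-optimization sketch: one treatment decision must
# perform across several scenarios, not just one forecast. Scenario
# probabilities, recovery rates, and the worst-case floor are hypothetical.
import numpy as np
from scipy.optimize import linprog

cost = np.array([10.0, 6.0, 4.0])     # cost per treated account ($)
size = np.array([5000, 8000, 12000])  # accounts per segment
budget = 90_000.0

# Per-account recoveries under three scenarios.
scenarios = np.array([
    [150.0, 70.0, 45.0],   # mild downturn
    [100.0, 60.0, 30.0],   # moderate downturn
    [ 20.0, 40.0, 15.0],   # severe downturn (segment 1 losses spike)
])
prob = np.array([0.5, 0.3, 0.2])

# Objective: maximize probability-weighted recovery (negated for linprog).
c = -(prob @ scenarios)

# Constraints: stay within budget, and recover at least the floor amount
# in every scenario, including the worst one.
floor = 400_000.0
A_ub = np.vstack([cost, -scenarios])
b_ub = np.concatenate([[budget], -floor * np.ones(len(scenarios))])

res = linprog(c=c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, s) for s in size])
print("Treated per segment:", np.round(res.x))
print("Expected recovery: $%.0f" % (prob @ scenarios @ res.x))
print("Worst-case recovery: $%.0f" % (scenarios @ res.x).min())
```

Sensitivity Analysis (the first bullet) falls out of the same sketch almost for free: perturb the scenario recoveries or the budget, re-solve, and observe how the recommended strategy and its objective move.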

These six advanced analytics techniques are especially helpful when developing business strategies for a time in which key values are unknown, including future unemployment levels, staffing budgets, data reporting practices, interest rates, and customer demand. Business decisions can, and arguably must, be optimized during times of uncertainty. But during times like these, it is especially important that analysts understand how and why to account for the uncertainty in both the data and the models.
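On the model-quality side (the fourth and sixth bullets), one simple, useful check in a fast-moving economy is to compare predicted probabilities against observed outcomes by score band. The sketch below bins hypothetical default predictions into deciles and reports predicted versus actual default rates; the simulated 30% drift stands in for an economy that has moved away from the model’s training data.

```python
# A minimal model-quality sketch: compare predicted default probabilities
# against observed outcomes by score decile. The data here are simulated;
# in practice the predictions come from the lender's credit models.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
pred = rng.beta(2, 8, size=n)                    # model-predicted PDs
# Simulate an economy where true risk has drifted 30% above predictions.
actual = rng.random(n) < np.clip(1.3 * pred, 0, 1)

# Bin accounts into predicted-risk deciles and compare rates in each bin.
deciles = np.quantile(pred, np.linspace(0, 1, 11))
bins = np.clip(np.digitize(pred, deciles[1:-1]), 0, 9)
print("decile  predicted  actual")
for d in range(10):
    mask = bins == d
    print(f"{d:>6}  {pred[mask].mean():>9.3f}  {actual[mask].mean():>6.3f}")
```

Tracked over time, a widening gap between the two columns is exactly the signal that Model Impact Analysis turns into a recalibrate-or-rebuild decision.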

Lenders, are you optimizing your servicing and debt management strategies? It has never been more important to do so, using advanced techniques that manage uncertainty mathematically.

Learn more about how Marketswitch can help you solve complex business problems and meet organizational objectives.

