With 2020 firmly behind us and multiple COVID-19 vaccines being distributed across the globe, many of us are entering 2021 with a bit of, dare we say it, optimism. But with consumer spending and consumer confidence dipping at the end of the year, alongside a spike in coronavirus cases, it’s apparent there’s still some uncertainty to come. This leaves businesses and consumers alike, along with fintechs and their peer financial institutions, wondering when the world’s largest economy will truly rebound.

But based on the most recent numbers available from Experian, fintechs have many reasons to be bullish. In this unprecedented year, marked by a global pandemic and a number of economic and personal challenges for both businesses and consumers, Americans are maintaining healthy credit profiles and responsible spending habits.

While growth expectedly slowed toward the end of the year, Q4 of 2020 saw solid job gains in the US labor market, with 883,000 jobs added through November and the US unemployment rate falling to 6.7%. Promisingly, the leisure and hospitality industry, one of the sectors hit hardest by the pandemic, added back the most jobs of any sector in October: 271,000. Additionally, US home sales hit a 14-year high, fueled by record-low mortgage rates. And finally, consumer sentiment rose to its highest level (81.4) since March 2020. Not only are these promising signs of continued recovery, they also illustrate that there are ample market opportunities now for fintechs and other financial institutions.

“It’s been encouraging to see many of our fintech partners getting back to their pre-COVID marketing levels,” said Experian Account Executive for Fintech Neil Conway. “Perhaps more promising, these fintechs are telling me that not only are response rates up but so is the credit quality of those applicants,” he said.

More plainly, if your company isn’t in the market now, you’re missing out. Here are the four steps fintechs should take to re-enter the lending market intelligently, while mitigating as much risk as possible.

Re-do Your Portfolio Review

Periodic portfolio reviews are standard practice for financial institutions. But the health crisis has posed unique challenges that necessitate increased focus on the health and performance of your credit portfolio. If you haven’t done so already, analyzing your current lending portfolio is imperative to ensure you are minimizing risk and maximizing profitability. It’s important to understand whether your portfolio is overexposed to customers in a particularly hard-hit industry, e.g., entertainment or bars and restaurants. At the account level, there may be opportunities to reevaluate customers based on a different risk appetite or credit criteria, and a portfolio review will help identify which of your customers could benefit from second-chance opportunities they might not otherwise receive.

Retool Your Data, Analytics and Models

As the pandemic has raged on, fintechs have realized many of the traditional data inputs that informed credit models and underwriting may not give a complete picture of a consumer. Essentially, a 720 score in June 2020 may not mean the same thing today, and forbearance periods have made payment history and delinquency less predictive of future ability to pay. To stay competitive, fintechs must make sure they have access to the freshest, most predictive data. This means adding alternative data and attributes to your data-driven decisioning strategies as much as possible.
Alternative data, like income and employment data, enhances your ability to see a consumer’s entire credit portfolio, which gives lenders the confidence to continue to lend – as well as the ability to track and monitor a consumer’s historical performance (a good indicator of whether a consumer has both the intention and the ability to repay a loan).

Re-Model Your Lending Criteria

One of the many things the global health crisis has affirmed is the ongoing need for the freshest, most predictive data inputs. But even with the right data, analytics can still be tedious, prolonging deployment when time is of the essence. Traditional models are too slow to develop and deploy, and they underperform during sudden economic upheavals. To stay ahead in times of recovery or growth, fintechs need high-quality analytics models, running on large and varied data sets, that they can deploy quickly and decisively. Unlike many banks and traditional financial institutions, fintechs are positioned to nimbly take advantage of market opportunities. Once your models are performing well, they should be deployed into the market to capitalize on creditworthy current and future borrowers.

Advertising/Prescreening for Intentional Acquisition

As fintechs look to re-enter the market or ramp up their prescreen volumes to pre-COVID levels, it’s imperative to reach the right prospects, with the right offer, based on where and how they’re browsing. More consumers than ever are relying on their phones for browsing and mobile banking, but aligning messaging and offers across devices and platforms is still important. Here’s where data-driven advertising becomes imperative to create a more relevant experience for consumers, while protecting privacy.

As 2021 rolls forward, there will be ample chances for fintechs to capitalize on new market opportunities. Through up-to-date analysis of your portfolio, ensuring you have the freshest, most predictive data, adjusting your lending criteria and tweaking your approach to advertising and prescreening, you can be ready for the opportunities brought on by the economic recovery. How is your fintech gearing up to re-enter the market?

Learn more
Financial institutions preparing for the launch of the Financial Accounting Standards Board’s (FASB) new current expected credit loss model, or CECL, may have concerns when it comes to preparedness, implications and overall impact. Gavin Harding, Experian’s Senior Business Consultant, and Jose Tagunicar, Director of Product Management, tackled some of the tough questions posed by the new accounting standard. Check out what they had to say:

Q: How can financial institutions begin the CECL transition process?

JT: To prepare for the CECL transition process, companies should conduct an operational readiness review, which includes:

Analyzing your data for existing gaps.
Determining important milestones and preparing for implementation with a detailed roadmap.
Running different loss methods to compare results.

Once losses are calculated, you’ll want to select the best methodology based on your portfolio.

Q: What is required to comply with CECL?

GH: Complying with CECL may require financial institutions to gather, store and calculate more data than before. To satisfy CECL requirements, financial institutions will need to focus on end-to-end management, determine estimation approaches that will produce reasonable and supportable forecasts, and automate their technology and platforms. Additionally, well-documented CECL estimations will require integrated workflows and incremental governance.

Q: What should organizations look for in a partner that assists in measuring expected credit losses under CECL?

GH: It’s expected that many financial institutions will use third-party vendors to help them implement CECL. Third-party solutions can help institutions prepare for the organizational and operational implications by developing an effective data strategy plan and quantifying the impact of various forecasted conditions. The right third-party partner will deliver an integrated framework that empowers clients to optimize their data, enhance their modeling expertise and ensure the policies and procedures supporting model governance meet regulatory requirements.

Q: What is CECL’s impact on financial institutions? How does the impact for credit unions/smaller lenders differ (if at all)?

GH: CECL will have a significant effect on financial institutions’ accounting, modeling and forecasting. It also heavily impacts their allowance for credit losses and financial statements. Financial institutions must educate their investors and shareholders about how CECL-driven disclosure and reporting changes could potentially alter their bottom line. CECL’s requirements entail data that most credit unions and smaller lenders haven’t been actively storing and saving, leaving them with historical data that may not have been recorded or will be inaccessible when it’s needed for a CECL calculation.

Q: How can Experian help with CECL compliance?

JT: At Experian, we have one simple goal in mind when it comes to CECL compliance: how can we make it easier for our clients? Our Ascend CECL Forecaster™, in partnership with Oliver Wyman, allows our clients to create CECL forecasts in a fraction of the time it normally takes, using a simple, configurable application that accurately predicts expected losses. The Ascend CECL Forecaster enables you to:

Fulfill data requirements: We don’t ask you to gather, prepare or submit any data.
The application comprises Experian’s extensive historical data, delivered via the Ascend Technology Platform™, economic data from Oxford Economics, and the auto and home valuation data needed to generate CECL forecasts for each unsecured and secured lending product in your portfolio.

Leverage innovative technology: The application uses advanced machine learning models built on 15 years of industry-leading credit data using high-quality Oliver Wyman loan-level models.

Simplify processes: One of the biggest challenges our clients face is the amount of time and analytical effort it takes to create one CECL forecast, much less several that can be compared for optimal results. With the Ascend CECL Forecaster, creating a forecast is a simple process that can be delivered quickly and accurately.

Q: What are immediate next steps?

JT: As mentioned, complying with CECL may require you to gather, store and calculate more data than before. Therefore, it’s important that companies act now to better prepare. Immediate next steps include:

Establishing your loss forecast methodology: CECL will require a new methodology, making it essential to take advantage of advanced statistical techniques and third-party solutions.
Making additional reserves available: It’s imperative to understand how CECL impacts both revenue and profit. According to some estimates, banks will need to increase their reserves by up to 50% to comply with CECL requirements.
Preparing your board and investors: Make sure key stakeholders are aware of the potential costs and profit impacts that these changes will have on your bottom line.

Speak with an expert
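For readers who want to see the basic mechanics behind a loss forecast, here is a minimal, illustrative sketch of a discounted lifetime expected-loss calculation of the general kind CECL requires. It is not the Ascend CECL Forecaster or Oliver Wyman methodology; the default probabilities, loss severity, balances and discount rate are all made-up assumptions.

```python
# Illustrative only: a generic lifetime expected-credit-loss calculation.
# All inputs below are hypothetical, not Experian or Oliver Wyman model outputs.

def lifetime_ecl(marginal_pd, lgd, ead, annual_discount_rate):
    """Discounted lifetime expected credit loss for one loan.

    marginal_pd[t] -- assumed probability of default in year t+1
    lgd            -- assumed loss given default (fraction of exposure)
    ead[t]         -- assumed exposure (balance) in year t+1
    """
    ecl = 0.0
    for year, (pd_t, ead_t) in enumerate(zip(marginal_pd, ead), start=1):
        ecl += pd_t * lgd * ead_t / (1 + annual_discount_rate) ** year
    return ecl


# Hypothetical three-year unsecured loan
reserve = lifetime_ecl(
    marginal_pd=[0.02, 0.03, 0.025],  # assumed yearly default probabilities
    lgd=0.85,                         # assumed loss severity for unsecured credit
    ead=[10_000, 7_000, 3_500],       # assumed amortizing balance
    annual_discount_rate=0.05,
)
print(f"Lifetime expected credit loss: {reserve:,.2f}")  # roughly 388
```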
Machine learning (ML), the newest buzzword, has swept into the lexicon and captured the interest of us all. Its recent, widespread popularity has stemmed mainly from the consumer perspective. Whether it’s virtual assistants, self-driving cars or romantic matchmaking, ML has rapidly positioned itself in the mainstream.

Though ML may appear to be a new technology, its use in commercial applications has been around for some time. In fact, many of the data scientists and statisticians at Experian are considered pioneers in the field of ML, going back decades. Our team has developed numerous products and processes leveraging ML, from our world-class consumer fraud and ID protection to credit data products like our Trended 3D™ attributes. In fact, we were just highlighted in the Wall Street Journal for how we’re using machine learning to improve our internal IT performance.

ML’s ability to consume vast amounts of data to uncover patterns and deliver results that are not otherwise humanly possible is what makes it unique and applicable to so many fields. This predictive power has now sparked interest in the credit risk industry. Unlike fraud detection, where ML is well established and used extensively, credit risk modeling has until recently taken a cautious approach to adopting newer ML algorithms. Because of regulatory scrutiny and a perceived lack of transparency, ML hasn’t enjoyed the same broad acceptance as some of credit risk modeling’s more established methods.

When it comes to credit risk models, delivering the most predictive score is not the only consideration for a model’s viability. Modelers must be able to explain and detail the model’s logic, or its “thought process,” for calculating the final score. This means taking steps to ensure the model’s compliance with the Equal Credit Opportunity Act, which forbids discriminatory lending practices. Federal laws also require lenders to send adverse action responses if a consumer’s credit application has been declined, which in turn requires that the model be able to highlight the top reasons for a less-than-optimal score. And so, while ML may be able to deliver the best predictive accuracy, its ability to explain how the results are generated has always been a concern.

ML has been stigmatized as a “black box,” where data mysteriously gets transformed into the final predictions without a clear explanation of how. However, this is changing. Depending on the ML algorithm applied to credit risk modeling, we’ve found risk models can offer the same transparency as more traditional methods such as logistic regression. For example, gradient boosting machines (GBMs) are designed as a predictive model built from a sequence of several decision tree submodels. The very nature of GBMs’ decision tree design allows statisticians to explain the logic behind the model’s predictive behavior (a small illustration follows below). We believe model governance teams and regulators in the United States may become comfortable with this approach more quickly than with deep learning or neural network algorithms, since GBMs are represented as sets of decision trees that can be explained, while neural networks are represented as long sets of cryptic numbers that are much harder to document, manage and understand.

In future blog posts, we’ll discuss the GBM algorithm in more detail and how we’re using its predictability and transparency to maximize credit risk decisioning for our clients.
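To make the transparency point concrete, here is a minimal sketch of how per-applicant “top reason” explanations can be pulled out of a GBM’s additive tree structure. It assumes scikit-learn’s GradientBoostingClassifier and the shap package, uses synthetic data with hypothetical attribute names, and is an illustration rather than Experian’s production approach.

```python
# A minimal sketch (not Experian's production method) of extracting adverse
# action-style "top reasons" from a gradient boosting machine.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical attribute names; a real credit model would use bureau attributes.
feature_names = ["utilization", "inquiries_6mo", "months_on_file", "delinquencies"]

# Synthetic data: treat y == 1 as the "bad" outcome for this toy example.
X, y = make_classification(n_samples=5000, n_features=4, n_informative=4,
                           n_redundant=0, random_state=42)

gbm = GradientBoostingClassifier(n_estimators=200, max_depth=3, random_state=42)
gbm.fit(X, y)

# TreeExplainer decomposes each prediction into additive per-attribute
# contributions -- the property that makes tree ensembles explainable.
explainer = shap.TreeExplainer(gbm)
contributions = explainer.shap_values(X[:1])[0]  # one applicant's contributions

# Rank the attributes pushing this applicant's score toward "bad"
# (positive contributions raise the predicted log-odds of class 1).
order = np.argsort(contributions)[::-1]
top_reasons = [feature_names[i] for i in order[:2]]
print("Top score factors for this applicant:", top_reasons)
```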
You just finished redeveloping an existing scorecard, and now it’s time to replace the old with the new. If not properly planned, switching from one scorecard to another within a decisioning or scoring system can be disruptive. Once a scorecard has been redeveloped, it’s important to measure the impact of the changes to the strategy that result from replacing the old model with the new one. Evaluating such changes and modifying the strategy where needed will not only optimize strategy performance, but also maximize the full value of the newly redeveloped model. Such an impact assessment can be completed with a swap set analysis.

The phrase swap set refers to “swapping out” a set of customer accounts — generally bad accounts — and replacing them with, or “swapping in,” a set of good customer accounts. Swap-ins are the customer population segment you didn’t previously approve under the old model but would approve with the new model. Swap-outs are the customer population segment you previously approved with the old model but wouldn’t approve with the new model. A worthy objective is to replace bad accounts with good accounts, thereby reducing the overall bad rate. However, different approaches can be used when evaluating swap sets to optimize your strategy and keep:

The same overall bad rate while increasing the approval rate.
The same approval rate while lowering the bad rate.
The same approval and bad rates while increasing customer activation or customer response rates.

It’s also important to assess the population that doesn’t change — the population that would be approved or declined using either the old or the new model. The following chart highlights the three customer segments within a swap set analysis. With the incumbent model, the bad rate is 8.3%. With the new model, however, the bad rate is 4.9%. That’s a reduction of 3.4 percentage points, or a 41% improvement in the bad rate.

This type of planning also is beneficial when replacing one generic model with another or with a custom-developed model. Click here to learn more about how the Experian Decision Analytics team can help you manage the impacts of migrating from a legacy model to a newly developed model.
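As a concrete illustration of the segments described above, here is a minimal swap set sketch in Python (pandas assumed). The account data and score cutoffs are hypothetical; any portfolio file containing both scores and a good/bad flag would work the same way.

```python
# A minimal swap set sketch: compare approve/decline decisions from an old and
# a new scorecard on the same accounts. Scores, flags and cutoffs are made up.
import pandas as pd

accounts = pd.DataFrame({
    "old_score": [640, 700, 580, 720, 660, 610, 690, 730],
    "new_score": [680, 690, 620, 710, 600, 655, 700, 740],
    "bad":       [  0,   0,   1,   0,   1,   0,   0,   0],
})

OLD_CUTOFF, NEW_CUTOFF = 650, 650
old_approve = accounts["old_score"] >= OLD_CUTOFF
new_approve = accounts["new_score"] >= NEW_CUTOFF

swap_ins  = accounts[~old_approve &  new_approve]   # newly approved accounts
swap_outs = accounts[ old_approve & ~new_approve]   # no longer approved
unchanged = accounts[ old_approve ==  new_approve]  # same decision either way

def bad_rate(approved):
    """Share of approved accounts that went bad."""
    return approved["bad"].mean() if len(approved) else float("nan")

print("Old bad rate:", round(bad_rate(accounts[old_approve]), 3))
print("New bad rate:", round(bad_rate(accounts[new_approve]), 3))
print("Swap-ins:", len(swap_ins), " Swap-outs:", len(swap_outs))
```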
By: Kari Michel

Credit risk models are used by almost every lender, and there are many options to choose from, including custom and generic models. With so many choices, how do you know what is best for your portfolio?

Custom models provide the strongest risk prediction and are developed using an organization’s own data. For many organizations, however, custom models may not be an option because of the size of the portfolio (it may be too small), a lack of data (including not enough bads), time constraints and/or a lack of resources. If a custom model is not an option for your organization, generic bureau scoring models are a very powerful alternative for predicting risk.

But how can you tell whether your current scoring model is the best option for you? You may be using a generic model today and hear about a new generic model, for example the VantageScore® credit score. How do you determine if the new model is more predictive than your current model for your portfolio?

The best way to find out is to do a head-to-head comparison – a validation. A validation requires a sample of accounts from your portfolio, including performance flags. An archive is pulled from the credit reporting agency, both scores are calculated for the same time period, and a performance chart is created to show the comparison. Two key performance metrics are used to determine the strength of a model. The KS (Kolmogorov-Smirnov) statistic measures the maximum difference between the bad and good cumulative score distributions. KS ranges from 0% to 100%; the higher the KS, the stronger the model. The second measurement uses the bad capture rate in the bottom 5%, 10% or 15% of the score range.

A stronger model will provide better risk prediction and allow an organization to make better risk decisions. Overall, when stronger scoring models are used, organizations will be best prepared to decrease their bad rates and have a more profitable portfolio.
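For readers who want to reproduce these two metrics, here is a minimal sketch of the KS statistic and the bad capture rate on simulated scores. The data is made up; in practice the scores and performance flags would come from the archive validation described above.

```python
# A minimal sketch of the two validation metrics described above: the KS
# statistic and the bad capture rate in the lowest-scoring X% of accounts.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
bad = rng.random(n) < 0.08                       # assumed ~8% bad rate
# Simulated scores: bads tend to score lower than goods.
scores = np.where(bad, rng.normal(600, 50, n), rng.normal(680, 50, n))

def ks_statistic(scores, bad):
    """Maximum gap between the cumulative bad and good score distributions."""
    order = np.argsort(scores)                   # worst scores first
    bad_sorted = bad[order]
    cum_bads  = np.cumsum(bad_sorted)  / bad.sum()
    cum_goods = np.cumsum(~bad_sorted) / (~bad).sum()
    return np.max(np.abs(cum_bads - cum_goods))

def bad_capture_rate(scores, bad, bottom_pct=0.10):
    """Share of all bads falling in the bottom X% of the score range."""
    cutoff = np.quantile(scores, bottom_pct)
    return bad[scores <= cutoff].sum() / bad.sum()

print(f"KS: {ks_statistic(scores, bad):.1%}")
print(f"Bad capture rate, bottom 10%: {bad_capture_rate(scores, bad):.1%}")
```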
By: Tracy Bremmer

In our last blog (July 30), we covered the first three stages of model development, which are necessary whether you’re developing a custom or generic model. We will now discuss the next three stages, beginning with the “baking” stage: scorecard development.

Scorecard development begins as segmentation analysis is taking place and any reject inference (if needed) is put into place. Considerations for scorecard development include:

Whether the model will be binned (predictive attributes divided into intervals) or continuous (each variable modeled in its entirety).
How to account for missing values (or “false zeros”).
How to evaluate the validation sample (a hold-out sample vs. an out-of-time sample).
How to avoid over-fitting the model.
Which statistics will be used to measure scorecard performance (KS, Gini coefficient, divergence, etc.).

Lenders often assume that once the scorecard is developed, the work is done. However, the remaining two steps are critical to the development and application of a predictive model: implementation/documentation and scorecard monitoring. Neglecting these two steps is like baking a cake but never taking a bite to make sure it tastes good.

Implementation and documentation is the last stage in developing a model that can be put to use for enhanced decisioning. Where the model will be implemented determines the timeliness and complexity of putting the model into practice. Models can be implemented in an in-house system, at a third-party processor, at a credit reporting agency, etc. Accurate documentation outlining the specifications of the model is critical for successful implementation and model audits.

Scorecard monitoring needs to be put into place once the model is developed, implemented and put into use. Scorecard monitoring evaluates population stability, scorecard performance and decision management to ensure that the model is performing as expected over time. If at any time there are variations from initial expectations, scorecard monitoring allows for immediate modifications to strategies.

With all the right ingredients, the right approach, and the checks and balances in place, your model development process has the potential to come out “just right!”
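The post doesn’t prescribe a specific stability metric, but one common way to quantify the population-stability piece of scorecard monitoring is a population stability index (PSI). The sketch below uses made-up score distributions and a commonly cited rule of thumb; treat it as illustrative rather than a required approach.

```python
# Illustrative scorecard-monitoring sketch: a population stability index (PSI)
# comparing the development-sample score distribution to a recent one.
# The score distributions below are simulated.
import numpy as np

def psi(expected_scores, actual_scores, n_bins=10):
    """PSI between the development (expected) and recent (actual) scores.

    Rule of thumb often cited: < 0.10 stable, 0.10-0.25 monitor, > 0.25 shifted.
    """
    # Bin edges come from the development distribution.
    edges = np.quantile(expected_scores, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    exp_pct = np.histogram(expected_scores, bins=edges)[0] / len(expected_scores)
    act_pct = np.histogram(actual_scores, bins=edges)[0] / len(actual_scores)
    # Small floor avoids log-of-zero problems in empty bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
development = rng.normal(680, 50, 50_000)        # scores at model build time
recent      = rng.normal(665, 55, 20_000)        # scores in the current month
print(f"PSI: {psi(development, recent):.3f}")    # > 0.10 would warrant a review
```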
By: Tracy Bremmer

Preheat the oven to 350 degrees. Grease the bottom of your pan. Mix all of your ingredients until combined. Pour the mixture into the pan and bake for 35 minutes. Cool before serving.

Model development, whether it’s a custom or generic model, is much like baking. You need to conduct your preparatory stages (project design), collect all of your ingredients (data), mix appropriately (analysis), bake (development), prepare for consumption (implementation and documentation) and enjoy (monitoring)! This blog will cover the first three steps in creating your model.

Project design involves meetings with the business users and model developers to thoroughly investigate what kind of scoring system is needed for enhanced decision strategies. Is it a credit risk score, bankruptcy score, response score, etc.? Will the model be used for front-end acquisition, account management, collections or fraud?

Data collection and preparation evaluates which data sources are available and how best to incorporate those data elements within the model-build process. The dependent variable (what you are trying to predict) and the types of independent variables (predictive attributes) to incorporate must be defined. Attribute standardization (leveling) and attribute auditing occur at this point. The final step before a model can be built is to define your sample selection.

Segmentation analysis provides the analytical basis for determining the optimal population splits for a suite of models to maximize the predictive power of the overall scoring system. Segmentation helps determine the degree to which multiple scores built on individual subpopulations can provide lift over building a single score (see the sketch at the end of this post).

Join us for our next blog, where we will cover the next three stages of model development: scorecard development, implementation/documentation and scorecard monitoring.
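As a rough illustration of the segmentation question, the sketch below compares the KS of a single model built on the whole population against segment-level models scored back onto the same accounts. The data, the segment split and the logistic regressions standing in for scorecards are all assumptions for demonstration purposes.

```python
# A minimal segmentation-analysis sketch: does building one model per segment
# lift KS over a single model on the full population? Data and segments are
# simulated; logistic regression is a stand-in for a real scorecard build.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
thin_file = rng.random(n) < 0.3                  # hypothetical segment flag
x = rng.normal(size=(n, 3))
# Assume the attribute/risk relationship differs by segment.
logit = np.where(thin_file,
                 1.5 * x[:, 0] - 0.2 * x[:, 1],
                 -0.3 * x[:, 0] + 1.2 * x[:, 2]) - 2
bad = rng.random(n) < 1 / (1 + np.exp(-logit))

def ks(score, bad):
    """Max gap between cumulative bad and good distributions by score."""
    order = np.argsort(score)
    cum_b = np.cumsum(bad[order]) / bad.sum()
    cum_g = np.cumsum(~bad[order]) / (~bad).sum()
    return np.max(np.abs(cum_b - cum_g))

# One model on everyone...
single = LogisticRegression().fit(x, bad)
score_single = single.predict_proba(x)[:, 1]

# ...versus one model per segment, scored back onto the same accounts.
score_seg = np.empty(n)
for seg in (True, False):
    m = thin_file == seg
    model = LogisticRegression().fit(x[m], bad[m])
    score_seg[m] = model.predict_proba(x[m])[:, 1]

print(f"KS, single model:    {ks(score_single, bad):.1%}")
print(f"KS, segmented suite: {ks(score_seg, bad):.1%}")  # lift from segmentation
```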