By: Joel Pruis

Some of you may be thinking, "Finally, we get to the meat of the matter." Yes, the decision strategies are extremely important when we talk about small business/business banking. Just remember how we got here, though. We first had to define:

Who are we going to pursue in this market segment?
How are we going to pursue this market segment - part 1 & part 2?
What are we going to require of the applicants to request the funds?

Without the above, we can create all the decision strategies we want, but their ultimate effectiveness will be severely limited because they will not rest on a foundation built for successful execution.

First we are going to lay the foundation for how we will create the decision strategy. The next blog post (yes, there is one more!) will get into some more specifics. With that said, it is still important that we go through the basics of establishing the decision strategy. These are not the same as investments.

Decision strategies based upon scorecards

We will not post the same disclosure that the financial reporting of public corporations or investment solicitations does - the standard disclosure that "past performance is not an indication of future results." On the contrary, for scorecards, past performance is an indication of future results. Scorecards are saying that if all conditions remain the same, future results should follow past performance. This is the key. We need to fully understand what the expected results are to be for the portfolio originated using the scorecard.

Therefore we need to understand the population of applications used to develop the scorecards - basically, the information that we had available to generate the scorecard. This ties directly to the information we required of the applications to be submitted. As we understand the types of applications that we are taking from our client base, we can start to understand some expected results. By analyzing what we have processed in the past, we can start to build a model for the expected results going forward. Learn from the past and try not to repeat the mistakes we made.

First we take a look at what we did approve and analyze the resulting performance of the portfolio. It is important to remember that we are not looking for the ultimate crystal ball, but rather a model that can work well to predict performance over the next 12 to 18 months. Those delinquencies and losses that take place 24, 36 or 48 months later should not and cannot be tied back to the information that was available at the time we originated the credit. We will talk about how to refresh the score and risk assessment in a later blog on portfolio management.

As we see what was approved and demonstrated acceptable performance, we can now look back at the applications we processed and see if any applications that fit the acceptable profile were actually declined. If so, what were the reasons for the declinations? Do these reasons conflict with our findings based upon portfolio performance? If so, we may have found some additional volume of acceptable loans. I say "may" because statistics by themselves do not tell the whole story, so be cautious of blindly following the statistical data. My statistics professor in college drilled into us the principle that "correlation does not mean causation." Remember that the next time a study is featured on the news. The correlation may be interesting, but it does not necessarily mean that those factors "caused" the result.
Just as important, challenge the results, but don't use outliers to disprove the results or the effectiveness of the models.

Once we have created the model and applied it to our typical application population, we can come up with some key metrics that we need to manage our decision strategies:

Expected score distributions of the applications
Expected approval percentage
Expected override percentage
Expected performance over the next 12-18 months

Expected score distributions

We build the models based upon what we expect to be the population of applications we process going forward. While we may target market certain segments, we cannot control the walk-in traffic, the referral volume or the businesses that will ultimately respond to our marketing efforts. Therefore we consider the normal application distribution and its characteristics, such as 1) score; 2) industry; 3) length of time in business; 4) sales size; etc. The importance of understanding and measuring the application/score distributions is demonstrated in the next few items.

Expected approval percentages

First we need to consider the approval percentage as an indication of how much of the business market we are extending credit to. Assuming we have a good representative sample of the business population in the applications we are processing, we need to determine what percentile of businesses will be our targeted market. Did our analysis show that we can accept the top 40%? 50%? Whatever the percentage, it is important that we continue to monitor our approval percentage to determine if we are starting to get too conservative or too liberal in our decisioning.

I typically counsel my clients that "just because your approval percentage is going up is not necessarily an improvement!" By itself, an increase in approval percentage is not good. I'm not saying that it is bad, just that when it goes up (or down!) you need to explain why. Was there a targeted marketing effort? Did you run into a short-term lucky streak? Or is it time to reassess the decision model and tighten up a bit?

Think about what happens in an economic expansion. More businesses are surviving (note I said surviving, not succeeding). Are more businesses meeting your minimum criteria? Has the overall population shifted up? If more businesses are qualifying but there has been no change in the industries targeted, we may need to increase our thresholds to maintain our targeted 50% of the market. Just because they met the standard criteria in the expansion does not mean they will survive in a recession. "But Joel, the recession might be more than 18 months away, so we have a good client for at least 18 months, don't we?" I agree, but we have to remember that we built the model assuming all things remain constant. Therefore, if we are confident that the expansion will continue at the same pace ad infinitum, then go ahead and live with the increased approval percentage. I will challenge you that it is those applicants that "squeaked by" during the expansion that will make up the largest portion of the losses when the recession comes.

I will also look to investigate the approval percentages when they go down. Yes, you can make the same claim that the scorecard is saying that the risk is too great over the next 12-18 months, but again I will challenge that if we continue to provide credit to the top 40-50% of all businesses, we are likely doing business with those clients that will survive and succeed when the expansion returns.
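To make this kind of monitoring concrete, here is a minimal sketch of how an institution might track its application score distribution and approval percentage against the expectations set when the scorecard was developed. It is purely illustrative and not from the original analysis; the score bands, expected shares, drift threshold and field names are all assumptions.

```python
import numpy as np
import pandas as pd

# Illustrative score bands and the share of applications expected in each band
# when the scorecard was developed (assumed numbers, not from the post).
BANDS = [(300, 579), (580, 639), (640, 699), (700, 759), (760, 850)]
EXPECTED_SHARE = np.array([0.10, 0.20, 0.30, 0.25, 0.15])

def band_shares(scores: pd.Series) -> np.ndarray:
    """Share of applications falling in each score band."""
    counts = np.array([((scores >= lo) & (scores <= hi)).sum() for lo, hi in BANDS])
    return counts / counts.sum()

def population_stability_index(actual: np.ndarray, expected: np.ndarray) -> float:
    """PSI, a common drift measure; values above roughly 0.25 are often read
    as a material shift in the incoming application population."""
    actual = np.clip(actual, 1e-6, None)
    expected = np.clip(expected, 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

def monthly_metrics(apps: pd.DataFrame) -> dict:
    """apps is assumed to have columns 'score' and 'decision' ('approve'/'decline')."""
    return {
        "approval_pct": float((apps["decision"] == "approve").mean()),
        "score_psi": population_stability_index(band_shares(apps["score"]), EXPECTED_SHARE),
    }

# Example usage with fabricated applications.
rng = np.random.default_rng(0)
apps = pd.DataFrame({
    "score": rng.normal(690, 60, 1_000).clip(300, 850),
    "decision": rng.choice(["approve", "decline"], 1_000, p=[0.45, 0.55]),
})
print(monthly_metrics(apps))
```

Tracked month over month, a drifting score distribution or a creeping approval percentage is exactly the kind of change that should trigger the "explain why" analysis described above.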
Again, do the analysis of "why" the approval percentage declined/dropped.

Expected override percentage

While the approval percentage may fluctuate or stay the same, another area to be reviewed is that of the override. Overrides can be score overrides or decision overrides. A score override contradicts the decision that was recommended based upon the score and/or overall decision strategy. A decision override occurs when the market/field has approval authority and overturns the decision made by the central underwriting group. Consequently you can have a score override, a decision override or both. Overrides can be an explanation for a change in approval percentages. While we anticipate a certain degree of overrides (say around 5%), should the overrides become too significant we start to lose control of the expected outcomes of the portfolio performance. As such, we need to determine why the overrides have increased (or potentially decreased) and the overrides' impact on the approval percentage. We will address some specifics around override management in a later blog. Suffice it to say, overrides will always be present, but we need to keep the amount of overrides within tolerances to be sure we can accurately assess future performance.

Expected performance over the next 12-18 months

The measure of expected performance is, at minimum, the expected probability/propensity of repayment. This may be labeled as the bad rate or the probability of default (PD). In a nutshell, it is the probability that the credit facility will reach a certain level of delinquency over the next 12-18 months. Note that the base level of expected performance based upon score is not the expected "loss" on the account. That is a combination of the probability of default and the expected loss given the event of default. For the purpose of this post we are talking about the probability of default and not the loss given default (a small numeric illustration appears at the end of this post). For reinforcement, we are simply talking about the percentage of accounts that go 30 or 60 or 90 days past due during the 12-18 months after origination.

So bottom line: if we maintain the score distribution of the applications processed by the financial institution, and maintain the approval percentage as well as the override percentage, we should be able to accurately assess the future performance of the newly originated portfolio.

Coming up next… A more tactical discussion of the decision strategy
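Here is the small numeric illustration promised above of how the probability of default relates to an expected loss figure; the numbers are made up purely for illustration.

```python
# Expected loss combines the probability of default (PD) with what is lost if
# default actually occurs: the loss given default (LGD) applied to the
# exposure at default (EAD). All figures below are illustrative assumptions.
pd_next_12_18_months = 0.04   # 4% of accounts expected to hit the bad definition
lgd = 0.45                    # assume 45% of the exposure is lost if default occurs
ead = 50_000                  # assumed outstanding balance at default, in dollars

expected_loss_per_account = pd_next_12_18_months * lgd * ead
print(f"Expected loss per account: ${expected_loss_per_account:,.0f}")  # $900
```

In other words, two portfolios with the same bad rate can carry very different expected losses if their balances or recovery prospects differ, which is why the post keeps the two concepts separate.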
Experian® QAS®, a leading provider of address verification software and services, recently released a new benchmark report on the data quality practices of top online retailers. The report revealed that 72 percent of the top 100 retailers are using some form of address verification during online checkout. This third annual benchmark report enables retailers to compare their online verification practices to those of industry leaders and provides tips for accurately capturing email addresses, a continuously growing data point for retailers. To find out how online retailers are utilizing contact data verification, download the complimentary report 2012 Address Verification Benchmark Report: The Top 100 Online Retailers. Source: Press release: Experian QAS Study Reveals Prevalence of Real-Time Address Verification Increasing Among Top Online Retailers.
A recent Experian credit trends study showcases the types of debts Americans have, the amounts they owe and the differences between generations. Nationally, the average debt in the United States is $78,030 and the average VantageScore® credit score is 751. The debt and VantageScore® credit score distribution for each group is listed below, with the 30 to 46 age group carrying the most debt and the youngest age group (19 to 29) carrying the least:

Age group       Average debt    Average VantageScore® credit score
66 and older    $38,043         829
47 to 65        $101,951        782
30 to 46        $111,121        718
19 to 29        $34,765         672

Get your VantageScore® credit score here

Source: To view the complete study, please click here

VantageScore® is owned by VantageScore Solutions, LLC
In Q3 2011, $143 billion – or nearly 44 percent of the $327 billion in new mortgage originations – was generated by VantageScore® A tier consumers. This represents an increase of 35 percent for VantageScore® A tier consumers when compared with originations for the prior quarter ($106 billion, or 39 percent of total originations). Watch Experian's Webinar for a detailed look at the current state of strategic default in mortgage and an update on consumer credit trends from the Q4 2011 Experian-Oliver Wyman Market Intelligence Reports.

Source: Experian-Oliver Wyman Market Intelligence Reports. VantageScore® is owned by VantageScore Solutions, LLC.
In my last two posts on bankcard and auto originations, I provided evidence as to why lenders have reason to feel optimistic about their growth prospects in 2012. With real estate lending, however, the recovery, or lack thereof, looks like it may continue to struggle throughout the year.

At first glance, it would appear that the stars have aligned for a real estate turnaround. Interest rates are at or near all-time lows, housing prices are at post-bubble lows and people are going back to work, with the unemployment rate at a 3-year low just above 8%. However, mortgage originations and HELOC limits were at $327B and $20B for Q3 2011, respectively. Admittedly not all-time quarterly lows, but well off levels of just a couple of years ago. And according to the Mortgage Bankers Association, 65% of the mortgage volume was from refinance activity.

So why the lull in real estate originations? Ironically, the same reasons I just mentioned that should drive a recovery.

Low interest rates – That is, for those that qualify. The most creditworthy, VantageScore® credit score A and B consumers made up nearly 77% of the $327B mortgage volume and 87% of the $20B HELOC volume in Q3 2011. While continuing to clean up their portfolios, lenders are adjusting their risk exposure accordingly.

Housing prices at multi-year lows – According to the S&P Case-Shiller index, housing prices were 4% lower at the end of 2011 when compared to the end of 2010 and at the lowest level since the real estate bubble. Prior to this report, many thought housing prices had stabilized, but the excess inventory of distressed properties continues to drive down prices, keeping potential buyers on the sidelines.

Unemployment rate at 3-year low – Sure, 8.3% sounds good now when you consider we were near 10% throughout 2010. But this is a far cry from the 4-5% rate we experienced just five years ago. Many consumers continue to struggle, affecting their ability to make good on their debt obligations, including their mortgage (see "Housing prices at multi-year lows" above), in turn affecting their credit status (see "Low interest rates" above)… you get the picture.

Ironic or not, the good news is that these forces will be the same ones to drive the turnaround in real estate originations. Interest rates are projected to remain low for the foreseeable future, foreclosures and distressed inventory will eventually clear out and the unemployment rate is headed in the right direction. The only missing ingredient needed to transform these variables from hurdle to growth factor is time.
While retail card utilization rates decreased slightly in Q3 2011, retail card delinquency rates increased for all performance bands (30-59, 60-89 and 90-180 days past due) in Q3 2011 after reaching multiyear lows the previous quarter. Listen to our recent Webinar on consumer credit trends and retail spending. Source: Experian-Oliver Wyman Market Intelligence Reports
Experian's recently released study on the credit card and mortgage payment behaviors* of consumers both nationally and in the top 30 Metropolitan Statistical Areas yielded interesting findings. Nationally, since 2007, 20 percent fewer credit card payments are 60 days late, but 25 percent more consumers are paying their mortgage 60 days late. The cities that showed the most improvements to bankcard payments include Cleveland, Ohio; San Antonio, Texas; Cincinnati, Ohio; Dallas, Texas; and Houston, Texas. Cities that have made the least improvements to their credit card payments include Riverside, Calif.; Seattle, Wash.; Tampa, Fla.; Phoenix, Ariz.; and Miami, Fla. Additionally, the data shows only four cities that improved in making mortgage payments: Cleveland, Ohio; Minneapolis, Minn.; Denver, Colo.; and Detroit, Mich. *All payment data is based on 60-day delinquencies. Learn more about managing credit.
Organizations approach agency management from three perspectives: (1) the need to audit vendors to ensure that they are meeting contractual, financial and legal compliance requirements; (2) the need to ensure that the organization's clients are being treated fairly and ethically in order to limit brand reputation risk and maintain a customer-centric commitment; and (3) the need to maximize revenue opportunities through collection of write-offs through successful performance management of the vendor.

Larger organizations often manage this process by embedding an agency manager at the vendor's site, notably on early-out / pre-charge-off outsourcing projects. As many utilities leverage the services of outsourcers for managing pre-final-bill collections, this becomes an important tool in managing quality and driving performance. The objective is to build a brand presence at the outsourcer's site and to focus its employees and management team on your customers and on daily performance metrics and outcomes. This is particularly useful in vendor locations where a number of high-profile client projects with larger resource pools compete for attention and performance, as an embedded manager can ensure that the brand gets the right level of attention and focus.

For post-write-off recovery collections in utility companies, embedding an agency manager becomes cost-prohibitive and less of an opportunity from an ROI perspective, due to the smaller inventories of receivables at any agency. We urge clients not to spread their placements across many vendors where each project is potentially small, as the vendors will more likely focus on larger client projects and dilute the performance on your receivables. Still, even a smaller pool of agency partners often does not provide a resource pool of 50-100+ collectors at a vendor location to warrant an embedded agency management approach.

Even without an embedded agency manager, organizations can use some of the techniques that are often used by onsite managers to ensure that the focus is on their projects, and maintain an ongoing quality review and performance management process. The tools are fairly common in today's environment: remote monitoring and quality reviews of customer contacts (i.e., digital logging), monthly publishing of competitive liquidation results in a competitive agency process with market share incentives, weekly updates of month-to-date competitive results to each vendor to promote competition, periodic "special" promotions/contests tied to performance where below target MTD, and monthly performance "kickers" for exceeding monthly liquidation targets at certain pre-determined levels (a simple sketch of this kind of competitive scorecard appears at the end of this post).

Agencies have selective memory, so it's vital to keep your projects on their radar. Remember, they have many more clients, all of whom want the same thing – performance. Some are less vocal and focused on results than others. Those that are always providing competitive feedback, quality reviews and feedback, contests, and market share opportunities are top of mind, and generally get the better selection of collectors, team/project managers, and overall vendor attention. The key is to maintain constant visibility and a competitive atmosphere.

Over the next several weeks, we'll dive into more detail on each of these areas:

Auditing and monitoring, onsite and remote
Best practices for improving agency performance
Scorecards and strategies
Market share competition and scorecards
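Here is the simple sketch of a month-to-date competitive scorecard referenced above. The agency names, liquidation target and share-shift rule are illustrative assumptions, not a prescribed method.

```python
# Month-to-date (MTD) liquidation rate per agency, ranked to drive a
# market-share competition. All figures and rules below are illustrative.
placements = {            # dollars placed with each agency this month
    "Agency A": 1_200_000,
    "Agency B": 950_000,
    "Agency C": 1_050_000,
}
collections = {           # dollars collected month-to-date
    "Agency A": 96_000,
    "Agency B": 57_000,
    "Agency C": 78_750,
}
monthly_target = 0.07     # assumed liquidation target: 7% of placements

rates = {agency: collections[agency] / placements[agency] for agency in placements}
ranking = sorted(rates, key=rates.get, reverse=True)

for agency in ranking:
    status = "above target" if rates[agency] >= monthly_target else "below target"
    print(f"{agency}: {rates[agency]:.1%} MTD liquidation ({status})")

# One possible market-share incentive: the top performer gains placements next
# month and the bottom performer gives some up.
print(f"Shift share toward {ranking[0]} and away from {ranking[-1]}")
```

Publishing results like these to every vendor each week keeps your project visible and makes the market-share incentive tangible.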
Findings from the Q2 Experian Business Benchmark Report showed that the amount of delinquent debt has increased significantly for the largest and smallest businesses. Very large businesses (those with more than 1,000 employees) had the greatest shift in percentage of dollars delinquent, shifting from 11.6 percent in June 2010 to 18.2 percent in June 2011, and very small businesses (those with one to four employees) had the greatest shift in percentage of dollars considered severely delinquent, increasing from 9.9 percent to 11.7 percent year over year. Conversely, the Q2 report indicated that mid-size businesses (those with 100 to 249 employees) have shown the greatest improvement in percentage of dollars delinquent and severely delinquent, reducing their debt by as much as 7.3 percent and 35.8 percent, respectively, year over year. Download previous reports and view a visual representation of this data broken down by state in an interactive map. Source: Download the current Business Benchmark Report
Customers see a data breach and the loss of their personal data as a threat to their security and finances, and with good reason. Identity theft occurs every four seconds in the United States, according to figures from the Federal Trade Commission. As consumers become savvier about protecting their personal data, they expect companies to do the same. And to go the extra mile for them if a data breach occurs. That means providing protection through extended fraud resolution that holds up under scrutiny. Protection that offers peace of mind, not just in the interim but years down the line. The stronger the level of protection you provide to individuals affected in a breach, the stronger their brand loyalty. Just like with any product, consumers can tell the difference between valid protection products that work and ones that just don’t. Experian® Data Breach Resolution takes care to provide the former, protection that works for your customers or employees affected in a breach and that reflects positively on you, as the company providing the protection. Experian’s ProtectMyID® Elite or ProtectMyID Alert provides industry-leading identity protection and, now, extended fraud resolution care. ExtendCARE™ now comes standard with every ProtectMyID data breach redemption membership, at no additional cost to you or the member. With ExtendCARE, the identity theft resolution portion of ProtectMyID remains active even when the full membership isn’t. ExtendCARE allows members to receive personalized assistance, not just advice, from an Identity Theft Resolution Agent. This high level of assistance is available any time identity theft occurs after individuals redeem their ProtectMyID memberships. Extended fraud resolution from a global leader like Experian can put consumers’ minds at ease following a breach. If we can help you with pre-breach planning or data breach resolution, reach out to us via our contact form on our contact page.
If you attended any of our past credit trends Webinars, you've heard me mention time and again how auto originations have been a standout during these times when overall consumer lending has been a challenge. In fact, total originated auto volumes topped $100B in the third quarter of 2011, a level not seen since mid-2008. But is this growth sustainable? Since bottoming at the start of 2009, originations have been on a tear for nearly three straight years. Given that, you might think that auto origination's best days are behind it. But these three key factors indicate originations may still have room to run:

1. The economy
Just as it was a factor in declining auto originations during the recession, the economy will drive continued increases in auto sales. If originations were growing during the challenges of the past couple of years, the expected improvements in the economy in 2012 will surely spur new auto originations.

2. Current cars are old
A recent study by Experian Automotive showed that today's automobiles on the road have hit an all-time high of 10.6 years of age. Obviously a result of the recent recession, consumers owning older cars will result in pent-up demand for newer and more reliable ones.

3. Auto lending is more diversified than ever
I'm talking diversification in a couple of ways:

Auto lending has always catered to a broader credit risk range than other products. In recent years, lenders have experimented with moving even further into the subprime space. For example, VantageScore® credit score D consumers now represent 24.4% of all originations vs. 21.2% at the start of 2009.

There is a greater selection of lenders that cater to the auto space. With additional players like captives, credit unions and even smaller finance companies competing for new business, consumers have several options to secure a competitively priced auto loan.

With all three variables in motion, auto originations definitely have a formula for continued growth going forward. Come find out if auto originations do in fact continue to grow in 2012 by signing up for our upcoming Experian-Oliver Wyman credit trends Webinar.
Part II: Where are Models Most Needed Now in Mortgages?

(Click here if you missed Part I of this post.)

By: John Straka

A first important question should always be: are all of your models, model uses, and model testing strategies, and your non-model processes, sound and optimal for your business? But in today's environment, two areas in mortgage stand out where better models and decision systems are most needed now: mortgage servicing and loan-quality assurance. I will discuss loan-quality assurance in a future installment.

Mortgage servicing and loss mitigation are clearly one area where better models and new decision analytics continue to have seemingly great potential to add significant new value. At the risk of oversimplifying, it is possible that a number of the difficulties and frustrations of mortgage servicers (and regulators) and borrowers in recent years may have been lessened through more efficient automated decision tools and optimization strategies. And because these problems will continue to persist for quite some time, it is certainly not too late to envision and move now toward an improved future state of mortgage servicing, or to continue to advance your existing new strategic direction by adding to enhancements already underway.

Much has been written about the difficulties faced by many mortgage servicers who have been overwhelmed by the demands of many more delinquent and defaulted borrowers and by very extensive, evolving government involvement in new programs, performance incentives and standards. A strategic question on the minds of many executives and others in the industry today seems to be: where is all of this going? Is there a generally viable strategic direction for mortgage servicers that can help them emerge from their current issues—perhaps similar to the improved data, standards, modeling, and technologies that allowed the mortgage industry in the 1990s to emerge overall quite successfully from the problems of the late 1980s and early 90s?

To review briefly, the mortgage industry problems of the early 1990s were less severe, of course—but really not dissimilar to the current environment. There had been a major home-price correction in California, in New England, and in a number of large metro areas elsewhere. A "low doc" mortgage era (and other issues) had left Citicorp nearly insolvent, for example, and caused other significant losses on top of the losses generated by the home-price declines. A major source of most mortgage funding, the Savings & Loan industry, had largely collapsed, with losses having to be resolved by a special government agency. Statistical mortgage credit scoring and automated underwriting resulted from the improved data, standards, modeling, and technologies that allowed the mortgage industry to recover in the 1990s, allowing mortgages to catch up with the previously established use of this decision technology in cards, autos, etc., and benefiting the mortgage industry with reduced costs and significant gains in efficiency and risk management.

An important question today is: is there a similar "renaissance," so to speak, now in the offing or at hand for mortgage servicers, despite all of the still ongoing problems?
Let me offer here a very simple analogy—with a disclaimer that this is only a basic starting viewpoint, an oversimplification, recognizing that mortgage servicing and loss mitigation is extraordinarily complex in its details and often seems only to grow more complex by the day (with added constraints and uncertainties piling on). The simple analogy is this: consider your loan-level Net Present Value (NPV), or other key objective of loan-level decisions in servicing and loss mitigation, to be analogous to the statistically based mortgage default "Score" of automated underwriting for originations in the 1990s.

Viewed in this way, a simple question stemming from the figure below is: can you reduce costs and satisfy borrowers and performance standards better by automating and focusing your servicing representatives more, or primarily, on the "Refer" group of borrowers? A corollary question is: can more automated model-based decision engines confidently reduce costs and achieve added insights and efficiencies in servicing the lowest- and highest-NPV delinquent borrowers and the Refer range? Another corollary question is: are new government-driven performance standards helping, hindering or even preventing particular moves toward this type of objective?

Is this a generally viable strategic direction for the future (or even the present) of mortgage servicing? Is it your direction today? What is your vision for the future of your quality mortgage servicing?
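To picture the analogy, here is a minimal sketch of a decision engine that automates the clearly low- and high-NPV delinquent loans and refers the ambiguous middle band to servicing representatives. This is my illustration rather than a description of any particular servicer's system; the thresholds and treatment labels are assumptions.

```python
from dataclasses import dataclass

# Illustrative cutoffs: loans whose modeled workout NPV (relative to foreclosure)
# is clearly negative or clearly positive are handled automatically; the
# ambiguous middle band is referred to a servicing representative.
AUTO_DECLINE_BELOW = -5_000    # assumed cutoff, in dollars
AUTO_APPROVE_ABOVE = 20_000    # assumed cutoff, in dollars

@dataclass
class DelinquentLoan:
    loan_id: str
    workout_npv: float   # modeled NPV of the proposed workout vs. foreclosure

def route(loan: DelinquentLoan) -> str:
    """Return the servicing treatment path for a delinquent loan."""
    if loan.workout_npv < AUTO_DECLINE_BELOW:
        return "automated: pursue the alternative track (e.g., short sale/foreclosure)"
    if loan.workout_npv > AUTO_APPROVE_ABOVE:
        return "automated: offer the workout/modification"
    return "refer: manual review by a servicing representative"

# Example usage.
for loan in (DelinquentLoan("L-001", -12_000),
             DelinquentLoan("L-002", 8_500),
             DelinquentLoan("L-003", 31_000)):
    print(loan.loan_id, "->", route(loan))
```

The strategic question in the post is then whether concentrating scarce servicing staff on that "refer" band, while monitoring the automated bands, reduces cost and improves borrower outcomes.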
By: Joel Pruis

One might consider this topic redundant to the last submission around application requirements, and that assessment would be partially true. As such, we are not going to go over the data that has already been collected in the application, such as the demographic information of the applicant and guarantors or the business and personal financial information. That discussion, like Elvis, has "left the building." Rather, we will discuss the use of additional data to support the underwriting/decisioning process - namely:

Personal/Consumer credit data
Business data
Scorecards
Fraud data

Let's get a given out in the open. Personal credit data has a high correlation to the payment performance of a small business. The smaller the business, the higher the correlation. "Your honor, counsel requests the above be stipulated in the court records." "So stipulated for the record." "Thank you, your honor." With that put to rest (remember, you can always comment on the blog if you have any questions or want to comment on any of the content), the real debate in small business lending revolves around the use of business data.

Depth and availability of business data

There are some challenges with the gathering and dissemination of business data for use in decisioning - mainly around the history of the data for the individual entity. More specifically, a consumer is a single entity; for the vast majority of consumers, one does not bankrupt one "entity" and then start over as a new person to refresh their credit history. That is simply bankruptcy, and the bankruptcy stays with the individual. Businesses, however, can and in fact do close one entity and start up another. Restaurants and general contractors come to mind as two examples of individuals who will start up a business, go bankrupt and then start another business under a new entity, repeating the cycle multiple times. While this scenario is a challenge, one cannot refute the need to know how both the individual consumer and the individual business are handling their obligations, whether those are credit cards, auto loans or trade payables.

I once worked for a bank president in a small community bank who challenged me with the following mantra: "It's not what you know that you don't know that can hurt you, it is what you think you know but really don't that hurts you the most." I will admit that it took me a while to digest that statement when I first heard it. Once fully digested, the statement was quite insightful. How many times do we think we know something when we really don't? How many times do we act on an assumed understanding but find that our understanding was flawed? How sound was our decision when we had the flawed understanding?

The same holds true as it relates to the use (or lack thereof) of business information. We assume that we don't need business information because it will not tell us much as it relates to our underwriting. How can the business data be relevant to our underwriting when we know that the business performance is highly correlated to the performance of the owner? Let's look at a study done a couple of years ago by the Business Information group at Experian. The data comes from a whitepaper titled "Predicting Risk: the relationship between business and consumer scores," published in 2008. The purpose of the study was to determine which goes bad first, the business or the owner. At a high level the data shows the following. If you're interested, you can download the full study here.
So while, a majority of the time and without any additional segmentation, the business will show signs of stress before the owner, if we look at the data by length of time in business we see some additional insights.

Figure: Distribution of businesses by years in business

An interesting distinction is that, based upon the age of the business, we will see the owner going bad before the business if the business is 5 years old or less. Once we get beyond the 5-year point, the "first bad" moves to the business. In either case, there is no clear case to be made to exclude one data source in favor of the other to predict risk in a small business origination process. While we can see that there is an overall majority where the business goes bad first, or that with a young small business the owner will more likely go bad first, in either case there is still a significant population where the inverse is true.

Bottom line: gathering both the business and the consumer data allows the financial institution to make a better and more informed decision. In other words, it prevents us from the damage caused by "thinking we know something when we really don't." (A small illustrative sketch of such a blended decision step follows below.)

Coming up next month – Decisioning Strategies.
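Here is the small illustrative sketch of a blended decision step that consults both the owner's consumer credit data and the business data, rather than relying on either alone. The score scales, cutoffs and the weighting by years in business are assumptions made for this sketch; they are not Experian's models or the study's parameters.

```python
# A toy blended decision rule that uses both the owner's consumer score and a
# business score. All cutoffs, scales and weights are illustrative assumptions.

def small_business_decision(consumer_score: int,
                            business_score: int,
                            years_in_business: float) -> str:
    # Hard decline if either profile is clearly weak.
    if consumer_score < 620 or business_score < 40:
        return "decline"

    # For young businesses, weight the owner's profile more heavily; for
    # established businesses, let the business profile carry more weight
    # (loosely echoing the "who goes bad first" finding discussed above).
    if years_in_business <= 5:
        blended = 0.7 * (consumer_score / 850) + 0.3 * (business_score / 100)
    else:
        blended = 0.4 * (consumer_score / 850) + 0.6 * (business_score / 100)

    if blended >= 0.80:
        return "approve"
    if blended >= 0.70:
        return "refer to underwriter"
    return "decline"

# Example usage.
print(small_business_decision(705, 62, 3))    # young firm, decent owner credit
print(small_business_decision(680, 85, 12))   # established firm, strong business profile
```

However the weights are set, the point of the post stands: a rule that can see both profiles avoids "thinking we know something when we really don't" about the half of the picture we chose not to pull.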
Part I: Types and Complexity of Models, and Unobservable or Omitted Variables or Relationships

By: John Straka

Since the financial crisis, it's not unusual to read articles here and there about the "failure of models." For example, a recent piece in Scientific American critiqued financial model "calibration," proclaiming in its title, Why Economic Models Are Always Wrong. In the mortgage business, for example, it is important to understand where models have continued to work, as well as where they failed, and what this all means for the future of your servicing and origination business. I also see examples of loose understanding about best practices in relation to the shortcomings of models that do work, and also about the comparative strengths and weaknesses of alternative judgmental decision processes. With their automation efficiencies, consistency, valuable added insights, and testability for reliability and robustness, statistical business models driven by extensive and growing data remain all around us today, and they are continuing to expand. So regardless of your views on the values and uses of models, it is important to have a clear view and sound strategies in model usage.

A Categorization: Ten Types of Models

Business models used by financial institutions can be placed in more than ten categories, of course, but here are ten prominent general types of models:

1. Statistical credit scoring models (typically for default)
2. Consumer- or borrower-response models
3. Consumer- or borrower-characteristic prediction models
4. Loss given default (LGD) and exposure at default (EAD) models
5. Optimization tools (these are not models, per se, but mathematical algorithms that often use inputs from models)
6. Loss forecasting and simulation models and value-at-risk (VaR) models
7. Valuation, option pricing, and risk-based pricing models
8. Profitability forecasting and enterprise-cash-flow projection models
9. Macroeconomic forecasting models
10. Financial-risk models that model complex financial instruments and interactions

Types 8, 9 and 10, for example, are often built up from multiple component models, and for this reason and others, these model categories are not mutually exclusive. Types 1 through 3, for example, can also be built from individual-level data (typical) or group-level data. No categorical type listing of models is perfect, and this listing is also not intended to be completely exhaustive.

The Strain of Complexity (or Model Ambition)

The principle of Occam's razor in model building, roughly translated, parallels the business dictum to "keep it simple, stupid." Indeed, the general ordering of model types 1 through 10 above (you can quibble on the details) tends to correspond to growing complexity, or growing model ambition. Model types 1 and 2 typically forecast a rank-ordering, for example, rather than also forecasting a level. Credit scores and credit scoring typically seek to rank-order consumers in their default, loss, or other likelihoods, without attempting to project the actual level of default rates, for example, across the score distribution. Scoring models that add the dimension of level prediction increase this layer of complexity. In addition, model types 1 through 3 are generally unconditional predictors. They make no attempt to add the dimension of predicting the time path of the dependent variable.
Predicting not just a consumer's relative likelihood of an event over a future time period as a whole, for example, but also the event's frequency level and the time path of this level each year, quarter, or month, is a more complex and ambitious modeling endeavor. (This problem is generally approached through continuous or discrete hazard models.)

While generalizations can be hazardous (exceptions can typically be found), it is generally true that, in the events leading up to and surrounding the financial crisis, greater model complexity and ambition was correlated with greater model failure. For example, at what is perhaps an extreme, Coval, Jurek, and Stafford (2009) have demonstrated how, for model type 10, even slight unexpected changes in default probabilities and correlations had a substantial impact on the expected payoffs and ratings of typical collateralized debt obligations (CDOs) with subprime residential mortgage-backed securities as their underlying assets. Nonlinear relationships in complex systems can generate extreme unreliability of system predictions.

To a lesser but still significant degree, the mortgage- or housing-related models included or embedded in types 6 through 10 were heavily dependent on home-price projections and risk simulation, which caused significant "expected"-model failures after 2006. Home-price declines in 2007-2009 reached what had previously only been simulated as extreme and very unlikely stress paths. Despite this clear problem, given the inescapably large impact of home prices on any mortgage model or decision system (of any kind), it is generally acceptable to separate the failure of the home-price projection from any failure of the relative default and other model relationships built around the possible home-price paths. In other words, if a model of type 8, for example, predicted the actual profitability and enterprise cash flow quite well given the actual extreme path of home prices, then this model can reasonably be regarded as not having failed as a model per se, despite the clear but inescapable reliance of the model's level projections on the uncertain home-price outcomes.

Models of type 1, statistical credit scoring models, generally continued to work well or reasonably well both in the years preceding and during the home-price meltdown and financial crisis. This is very largely due to these models' relatively modest objective of simply rank-ordering risks. To be sure, scoring models in mortgage, and more generally, were strongly impacted by the home-price declines and unusual events of the bubble and subsequent recession, with deteriorated strength in risk separation. This can be seen, for example, in the recent VantageScore® credit score stress-test study, VantageScore® Stress Testing, which shows the lowest risk-separation ability in the states with the worst home-price and unemployment outcomes (CA, AZ, FL, NV, MI). But these kinds of significant but comparatively modest magnitudes of deterioration were neither debilitating nor permanent for these models. In short, even in mortgage, scoring models generally held up pretty well through the crisis—not perfectly, but comparatively better than the more complex level-, system-, and path-prediction models. (See footnote 1.) Scoring models have also relied more exclusively on microeconomic behavioral stabilities, rather than including macroeconomic risk modeling. Fortunately, the microeconomic behavioral patterns have generally been much more stable.
Weak-credit borrowers, for example, have long tended to default at significantly higher rates than strong-credit borrowers—they did so preceding, and right through, the financial crisis, even as overall default levels changed dramatically; and they continue to do so today, in both strong and weak housing markets. (See footnote 2.)

As a general rule overall, the more complex and ambitious the model, the more complex are the many questions that have to be asked concerning what could go wrong in model risks. But relative complexity is certainly not the only type of model risk. Sometimes relative simplicity, otherwise typically desirable, can go in a wrong direction.

Unobservable or Omitted Variables or Relationships

No model can be perfect, for many reasons. Important determining variables may be unmeasured or unknown. Similarly, important parameters and relationships may differ significantly across different types of populations, and different time periods. How many models have been routinely "stress tested" on their robustness in handling different types of borrower populations (where unobserved variables tend to lurk) or different shifts in the mix of borrower sub-populations? This issue is more or less relevant depending on the business and statistical problem at hand, but overall, modeling practice has tended more often than not to neglect robustness testing (i.e., tests of validity and model power beyond validation samples).

Several related examples from the last decade appeared in models that were used to help evaluate subprime loans. These models used generic credit scores together with LTV, and perhaps a few other variables (or not), to predict subprime mortgage default risks in the years preceding the market meltdown. This was a hazardous extension of relatively simple model structures that worked better for prime mortgages (but had also previously been extended there). Because, for example, the large majority of subprime borrowers had weak credit records, generic credit scores did not help nearly as much to separate risk. Detailed credit attributes, for example, were needed to help better predict the default risks in subprime. Many pre-crisis subprime models of this kind were thus simplified, but overly so, as they began with important omitted variables. This was not the only omitted-variables problem in this case, and not the only problem. Other observable mortgage risk factors were oddly absent in some models. Unobserved credit risk factors also tend to be correlated with observed risk factors, creating greater volatility and unexplained levels of higher risk in observed higher-credit-risk populations.

Traditional subprime mortgages also focused mainly on poor-credit borrowers who needed cash-out refinancing for debt consolidation or some other purpose. Such borrowers, in shaky financial condition, were more vulnerable to economic shocks, but a debt-consolidating cash-out mortgage could put them in a better position, with lower total monthly debt payments that were tax-deductible. So far, so good—but an omitted capacity-risk variable was the number of previous cash-out refinancings done (which loan brokers were incented to "churn"). The housing bubble allowed weak-capacity borrowers to sustain themselves through more extracted home equity, until the music stopped. Rate and fee structures of many subprime loans further heightened capacity risks.
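As one small illustration of the robustness testing described above, the same scoring model's rank-ordering power can be measured separately on different borrower sub-populations, for example with a KS (Kolmogorov-Smirnov) statistic per segment. This is a sketch under assumed column names and fabricated data, not a prescribed methodology.

```python
import numpy as np
import pandas as pd

def ks_statistic(scores: pd.Series, bad_flag: pd.Series) -> float:
    """KS separation between the score distributions of bad (default) and good
    accounts; higher values indicate stronger rank-ordering."""
    order = np.argsort(scores.values)
    bads = bad_flag.values[order]
    cum_bad = np.cumsum(bads) / max(bads.sum(), 1)
    cum_good = np.cumsum(1 - bads) / max((1 - bads).sum(), 1)
    return float(np.max(np.abs(cum_bad - cum_good)))

def ks_by_segment(df: pd.DataFrame, segment_col: str) -> pd.Series:
    """Rank-ordering power computed separately for each sub-population,
    e.g. prime vs. subprime, purchase vs. cash-out refinance, etc."""
    return df.groupby(segment_col).apply(lambda g: ks_statistic(g["score"], g["bad"]))

# Fabricated data purely to show the mechanics.
rng = np.random.default_rng(1)
n = 5_000
df = pd.DataFrame({
    "segment": rng.choice(["prime", "subprime"], n, p=[0.7, 0.3]),
    "score": rng.normal(700, 60, n),
})
# Synthetic relationship: defaults are more likely at lower scores.
df["bad"] = (rng.random(n) < 1 / (1 + np.exp((df["score"] - 620) / 25))).astype(int)

print(ks_by_segment(df, "segment"))
```

A model that separates risk well on its development population but poorly on a sub-population it rarely saw is exactly what this kind of check is meant to surface before, rather than after, the population mix shifts.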
A significant population shift also occurred when subprime mortgage lenders significantly raised their allowed LTVs and added many more shaky purchase-money borrowers last decade; previously, targeted affordable-housing programs from the banks and the conforming-loan space had instead generally required stronger credit histories and capacity. Significant shifts like this in any modeled population require very extensive model robustness testing and scrutiny. But instead, projected subprime-pool losses from the major purchasers of subprime loans, and the ratings agencies, went down in the years just prior to the home-price meltdown, not up (to levels well below those seen in widely available private-label subprime pool losses from 1990s loans).

Rules and Tradition in Lieu of Sound Modeling

Interestingly, however, these errant subprime models were not models that came into use in lender underwriting and automated underwriting systems for subprime—the front-end suppliers of new loans for private-label subprime mortgage-backed securities. Unlike the conforming-loan space, where automated underwriting using statistical mortgage credit scoring models grew dramatically in the 1990s, underwriting in subprime, including automated underwriting, remained largely based on traditional rules. These rules were not bad at rank-ordering the default risks, as traditional classifications of subprime A-, B, C and D loans showed. However, the rules did not adapt well to changing borrower populations and growing home-price risks either. Generic credit scores improved for most subprime borrowers last decade as they were buoyed by the general housing boom and economic growth. As a result, subprime-lender-rated C and D loans largely disappeared and the A- risk classifications grew substantially. Moreover, in those few cases where statistical credit scoring models were estimated on subprime loans, they identified and separated the risks within subprime much better than the traditional underwriting rules. (I authored an invited article early last decade, which included a graph, p. 222, that demonstrated this, Journal of Housing Research.) But statistical credit scoring models were scarcely or never used in most subprime mortgage lending.

In Part II, I'll discuss where models are most needed now in mortgages.

Footnotes:

[1] While credit scoring models performed better than most others, modelers can certainly do more to improve and learn from the performance declines at the height of the home-price meltdown. Various approaches have been undertaken to seek such improvements.

[2] Even strategic mortgage defaults, while comprising a relatively larger share of strong-credit borrower defaults, have not significantly changed the traditional rank-ordering, as strategic defaults occur across the credit spectrum (weaker credit histories include borrowers with high income and assets).
By: Staci Baker

Just before the holidays, the Fed released proposed rules, which implement Sections 165 and 166 of the Dodd-Frank Act. According to The American Bankers Association, "The proposals cover such issues as risk-based capital requirements, leverage, resolution planning, concentration limits and the Fed's plans to regulate large, interconnected financial institutions and nonbanks." How will these rules affect you?

One of the biggest concerns I have been hearing from institutions is the effect that the proposed rules will have on profitability. Greater liquidity requirements, created by both the Dodd-Frank Act and the Basel III rules, put pressure on banks to re-evaluate which lending segments they will continue to participate in, and they also impact the funds available for lending to consumers. What are you doing to proactively combat this?

Within the Dodd-Frank Act is the Durbin Amendment, which limits the debit card interchange fees an issuer can charge. As I noted in my prior blog detailing the fee cap associated with the Durbin Amendment, it's clear that these new regulations, in combination with previous rulings, will continue to put downward pressure on bank profitability. With all of this to consider, how will banks modify their business models to maintain a healthy bottom line while keeping customers happy?

Over my next few blog posts, I will take a look at the Dodd-Frank Act's effect on an institution's profitability and highlight best practices to manage the impact to your organization.