Part I: Types and Complexity of Models, and Unobservable or Omitted Variables or Relationships

By: John Straka

Since the financial crisis, it's not unusual to read articles about the "failure of models." For example, a recent piece in Scientific American critiqued financial model "calibration," proclaiming in its title, "Why Economic Models Are Always Wrong." In the mortgage business, it is important to understand where models have continued to work, as well as where they failed, and what this all means for the future of your servicing and origination business. I also see loose understanding of best practices in relation to the shortcomings of models that do work, and of the comparative strengths and weaknesses of alternative judgmental decision processes. With their automation efficiencies, consistency, valuable added insights, and testability for reliability and robustness, statistical business models driven by extensive and growing data remain all around us today, and they are continuing to expand. So regardless of your views on the value and uses of models, it is important to have a clear view and sound strategies for model usage.

A Categorization: Ten Types of Models

Business models used by financial institutions can be placed in more than ten categories, of course, but here are ten prominent general types of models:

1. Statistical credit scoring models (typically for default)
2. Consumer- or borrower-response models
3. Consumer- or borrower-characteristic prediction models
4. Loss given default (LGD) and exposure at default (EAD) models
5. Optimization tools (these are not models per se, but mathematical algorithms that often use inputs from models)
6. Loss forecasting and simulation models and value-at-risk (VaR) models
7. Valuation, option pricing, and risk-based pricing models
8. Profitability forecasting and enterprise-cash-flow projection models
9. Macroeconomic forecasting models
10. Financial-risk models that model complex financial instruments and interactions

Types 8, 9 and 10, for example, are often built up from multiple component models, and for this reason and others, these model categories are not mutually exclusive. Types 1 through 3, for example, can also be built from individual-level data (typical) or group-level data. No categorical listing of models is perfect, and this listing is not intended to be exhaustive.

The Strain of Complexity (or Model Ambition)

The principle of Occam's razor in model building, roughly translated, parallels the business dictum to "keep it simple, stupid." Indeed, the general ordering of model types 1 through 10 above (you can quibble on the details) tends to correspond to growing complexity, or growing model ambition. Model types 1 and 2 typically forecast a rank-ordering, for example, rather than also forecasting a level. Credit scores and credit scoring typically seek to rank-order consumers in their default, loss, or other likelihoods, without attempting to project the actual level of default rates across the score distribution. Scoring models that add the dimension of level prediction take on an additional layer of complexity (a sketch below illustrates the distinction). In addition, model types 1 through 3 are generally unconditional predictors: they make no attempt to predict the time path of the dependent variable.
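To make the rank-ordering versus level-prediction distinction concrete, here is a minimal, self-contained sketch on synthetic data; the data, model, and numbers are illustrative assumptions, not anything from this article. Rank-ordering (discrimination) is summarized by AUC, while level prediction (calibration) is checked by comparing predicted and actual default rates within score deciles.

```python
# Minimal sketch: rank-ordering (discrimination) vs. level prediction
# (calibration). Synthetic data; all numbers are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 50_000
x = rng.normal(size=n)                          # stand-in for credit attributes
true_p = 1 / (1 + np.exp(-(-2.5 + 1.2 * x)))    # true default probabilities
default = rng.binomial(1, true_p)

model = LogisticRegression().fit(x.reshape(-1, 1), default)
pred = model.predict_proba(x.reshape(-1, 1))[:, 1]

# Rank-ordering (discrimination): does the model order bads above goods?
print("AUC:", round(roc_auc_score(default, pred), 3))

# Level prediction (calibration): do predicted RATES match actual rates?
edges = np.quantile(pred, np.linspace(0, 1, 11))
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (pred >= lo) & (pred <= hi)
    print(f"decile [{lo:.3f}, {hi:.3f}]: predicted {pred[m].mean():.3f}, "
          f"actual {default[m].mean():.3f}")
```

A model can retain a strong AUC while its decile-level rate predictions drift badly in a stressed environment, which is essentially why the modest rank-ordering objective proved more durable than level and path prediction.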
Predicting not just a consumer's relative likelihood of an event over a future time period as a whole, but also the event's frequency level and the time path of that level each year, quarter, or month, is a more complex and ambitious modeling endeavor. (This problem is generally approached through continuous or discrete hazard models.) While generalizations can be hazardous (exceptions can typically be found), it is generally true that, in the events leading up to and surrounding the financial crisis, greater model complexity and ambition were correlated with greater model failure. At what is perhaps an extreme, Coval, Jurek, and Stafford (2009) demonstrated how, for model type 10, even slight unexpected changes in default probabilities and correlations had a substantial impact on the expected payoffs and ratings of typical collateralized debt obligations (CDOs) with subprime residential mortgage-backed securities as their underlying assets. Nonlinear relationships in complex systems can generate extreme unreliability in system predictions.

To a lesser but still significant degree, the mortgage- and housing-related models included or embedded in types 6 through 10 were heavily dependent on home-price projections and risk simulation, which caused significant "expected"-model failures after 2006. Home-price declines in 2007-2009 reached what had previously been simulated only as extreme and very unlikely stress paths. Despite this clear problem, and given the inescapably large impact of home prices on any mortgage model or decision system of any kind, it is reasonable to separate the failure of the home-price projection from any failure of the relative default and other model relationships built around the possible home-price paths. In other words, if a model of type 8, for example, predicted actual profitability and enterprise cash flow quite well given the actual extreme path of home prices, then that model can reasonably be regarded as not having failed as a model per se, despite the clear but inescapable reliance of its level projections on uncertain home-price outcomes.

Models of type 1, statistical credit scoring models, generally continued to work well or reasonably well both in the years preceding the home-price meltdown and during the financial crisis. This is largely due to these models' relatively modest objective of simply rank-ordering risks. To be sure, scoring models in mortgage, and more generally, were strongly impacted by the home-price declines and unusual events of the bubble and subsequent recession, with deteriorated strength in risk separation. This can be seen, for example, in the recent VantageScore® credit score stress-test study, VantageScore® Stress Testing, which shows the lowest risk-separation ability in the states with the worst home-price and unemployment outcomes (CA, AZ, FL, NV, MI). But these significant yet comparatively modest magnitudes of deterioration were neither debilitating nor permanent for these models. In short, even in mortgage, scoring models generally held up well through the crisis: not perfectly, but comparatively better than the more complex level-, system-, and path-prediction models. (see footnote 1) Scoring models have also relied more exclusively on microeconomic behavioral stabilities rather than macroeconomic risk modeling, and fortunately the microeconomic behavioral patterns have generally been much more stable.
Weak-credit borrowers, for example, have long tended to default at significantly higher rates than strong-credit borrowers. They did so preceding, and right through, the financial crisis, even as overall default levels changed dramatically, and they continue to do so today, in both strong and weak housing markets. (see footnote 2)

As a general rule, the more complex and ambitious the model, the more numerous and complex the questions that must be asked about what could go wrong among the model's risks. But relative complexity is certainly not the only type of model risk. Sometimes relative simplicity, otherwise typically desirable, can go in a wrong direction.

Unobservable or Omitted Variables or Relationships

No model can be perfect, for many reasons. Important determining variables may be unmeasured or unknown. Similarly, important parameters and relationships may differ significantly across different types of populations and different time periods. How many models have been routinely "stress tested" for their robustness in handling different types of borrower populations (where unobserved variables tend to lurk) or shifts in the mix of borrower sub-populations? This issue is more or less relevant depending on the business and statistical problem at hand, but overall, modeling practice has more often than not neglected robustness testing (i.e., tests of validity and model power beyond validation samples).

Several related examples from the last decade appeared in models used to help evaluate subprime loans. These models used generic credit scores together with LTV, and perhaps a few other variables (or not), to predict subprime mortgage default risks in the years preceding the market meltdown. This was a hazardous extension of relatively simple model structures that worked better for prime mortgages (but had also previously been extended there). Because the large majority of subprime borrowers had weak credit records, generic credit scores did not help nearly as much to separate risk; detailed credit attributes were needed to better predict default risks in subprime. Many pre-crisis subprime models of this kind were thus oversimplified, beginning with important omitted variables.

This was not the only omitted-variables problem in this case, nor the only problem. Other observable mortgage risk factors were oddly absent from some models. Unobserved credit risk factors also tend to be correlated with observed risk factors, creating greater volatility and unexplained levels of higher risk in observed higher-credit-risk populations. Traditional subprime mortgages also focused mainly on poor-credit borrowers who needed cash-out refinancing for debt consolidation or some other purpose. Such borrowers, in shaky financial condition, were more vulnerable to economic shocks, but a debt-consolidating cash-out mortgage could put them in a better position, with lower total monthly debt payments that were tax deductible. So far, so good. But an omitted capacity-risk variable was the number of previous cash-out refinancings done (which loan brokers were incented to "churn"). The housing bubble allowed weak-capacity borrowers to sustain themselves through more extracted home equity, until the music stopped. Rate and fee structures of many subprime loans further heightened capacity risks.
A significant population shift also occurred when subprime mortgage lenders significantly raised their allowed LTVs and added many more shaky purchase-money borrowers last decade; previously, targeted affordable-housing programs from the banks and the conforming-loan space had generally required stronger credit histories and capacity. Significant shifts like this in any modeled population require very extensive model robustness testing and scrutiny. But instead, projected subprime-pool losses from the major purchasers of subprime loans, and from the ratings agencies, went down in the years just prior to the home-price meltdown, not up (to levels well below those seen in widely available private-label subprime pool losses from 1990s loans).

Rules and Tradition in Lieu of Sound Modeling

Interestingly, however, these errant subprime models were not the models used in lender underwriting and automated underwriting systems for subprime, the front-end suppliers of new loans for private-label subprime mortgage-backed securities. Unlike the conforming-loan space, where automated underwriting using statistical mortgage credit scoring models grew dramatically in the 1990s, underwriting in subprime, including automated underwriting, remained largely based on traditional rules. These rules were not bad at rank-ordering default risks, as traditional classifications of subprime A-, B, C and D loans showed. However, the rules did not adapt well to changing borrower populations and growing home-price risks. Generic credit scores improved for most subprime borrowers last decade as borrowers were buoyed by the general housing boom and economic growth. As a result, subprime-lender-rated C and D loans largely disappeared and the A- risk classification grew substantially. Moreover, in those few cases where statistical credit scoring models were estimated on subprime loans, they identified and separated the risks within subprime much better than the traditional underwriting rules did. (I authored an invited article in the Journal of Housing Research early last decade that included a graph, p. 222, demonstrating this.) But statistical credit scoring models were scarcely, if ever, used in most subprime mortgage lending.

In Part II, I'll discuss where models are most needed now in mortgages.

Footnotes:

[1] While credit scoring models performed better than most others, modelers can certainly do more to improve on and learn from the performance declines at the height of the home-price meltdown. Various approaches have been undertaken to seek such improvements.

[2] Even strategic mortgage defaults, while comprising a relatively larger share of strong-credit-borrower defaults, have not significantly changed the traditional rank-ordering, as strategic defaults occur across the credit spectrum (weaker credit histories include borrowers with high income and assets).
By: Staci Baker

Just before the holidays, the Fed released proposed rules implementing Sections 165 and 166 of the Dodd-Frank Act. According to the American Bankers Association, "The proposals cover such issues as risk-based capital requirements, leverage, resolution planning, concentration limits and the Fed's plans to regulate large, interconnected financial institutions and nonbanks." How will these rules affect you?

One of the biggest concerns I have been hearing from institutions is the effect the proposed rules will have on profitability. Greater liquidity requirements, created by both the Dodd-Frank Act and the Basel III rules, put pressure on banks to re-evaluate which lending segments they will continue to participate in, and also affect the funds available for lending to consumers. What are you doing to proactively combat this?

Within the Dodd-Frank Act is the Durbin Amendment, which regulates the interchange fees merchants are charged. As I noted in my prior blog detailing the fee cap associated with the Durbin Amendment, it's clear that these new regulations, in combination with previous rulings, will continue to put downward pressure on bank profitability. With all of this to consider, how will banks modify their business models to maintain a healthy bottom line while keeping customers happy? Over my next few blog posts, I will take a look at the Dodd-Frank Act's effect on an institution's profitability and highlight best practices to manage the impact to your organization.
For as long as there have been loans, there have been credit risk and risk management. In the early days of US banking, the difficulty of assessing risk meant that lending was severely limited, and many people were effectively locked out of the lending system. Individual review of loans gave way to numerical scoring systems used to make more consistent credit decisions, which later evolved into the statistically derived models we know today. Use of credit scores is an essential part of almost every credit decision made today. But what is the next evolution of credit risk assessment? Does the current look at a single number tell us all we need to know before extending credit? As shown in a recent score stability study, VantageScoreSM remains very predictive even in highly volatile cycles. While generic risk scores remain the most cost-effective, expedient and compliant method of assessing risk, this last economic cycle clearly shows a need for additional metrics (including other generic scores) to more fully illuminate the inherent risk of an individual from every angle. We've seen financial institutions tightening their lending policies in response to recent market conditions, sometimes to the point of hampering growth. But what if there were an opportunity to revisit this strategy with additional analytics to ensure continued growth without increasing risk? We'll explore that further over the coming weeks, so stick with me. And if there is a specific question or idea on your mind, leave a comment and we'll cover that too.
As we kick off the new year, I thought I'd dedicate a few blog posts to covering what consumer credit trends point to for potential growth opportunities in 2012, specifically in new loan originations for bankcard, automotive and real estate lending. With the holiday season behind us (and if you're anything like me, you have the credit card statements to prove it!), I'll start with bankcards for my first post of the year.

Everyone's an optimist at the start of a new year, and bankcard issuers have a right to feel cautiously optimistic about 2012 based on the trends of last year. In the second quarter of 2011, origination volumes grew to nearly $47B, up 28% from the same quarter a year earlier. In fact, originations have been growing steadily since the middle of 2010, with increasing distribution across all VantageScore risk bands and an impressive 42% increase in A-paper volume. So, is bankcard the new power portfolio for growth in 2012?

The broad origination risk distribution may signal the return of balance-carrying consumers (aka revolvers) versus those who pay with credit cards but pay off the balance every month (aka transactors). The tighter lending criteria imposed in recent years has improved portfolio performance significantly, but at the expense of the interest and fee profitability that comes from revolver use. This could change as more credit cards are put in the hands of a broader consumer risk base. And as consumer confidence continues to grow (it reached 64.5 in December, 10 points higher than November, according to the Conference Board), consumers in all risk categories will no doubt begin to leverage credit cards more heavily for continued discretionary spend, as highlighted in the most recent Experian-Oliver Wyman quarterly webinar.

Of course, portfolio growth with increased risk exposure requires a watchful eye on the delinquency performance of outstanding balances. We continue to be at or near historic lows for delinquency, but we did see a small uptick in early-stage delinquencies in the third quarter of 2011. That being said, issuers appear to have a good pulse on the card-carrying consumer and are capitalizing on improved payment behavior to maximize their risk/reward payoff. All in all, strong 2011 results and portfolio positioning have set the table for a promising 2012. Add an improving economy to the mix, and card issuers could shift from cautious to confident in their optimism for the new year.
By: Joel Pruis

Small Business Application Requirements

The debate on what constitutes a small business application is probably second only to the ongoing debate around centralized vs. decentralized loan authority (but we will get to that topic in a couple of blogs later). There are two topics to consider in this discussion, namely:

1. When is an application an application?
2. Do you process an incomplete application?

When is an application an application?

Any request by a small business with annual sales of $1,000,000 or less falls under Reg B. As we all know, because of this regulation we have to maintain proper records of when we received an application and when a decision on the application was made, as well as when it was communicated to the client. To keep yourself out of trouble, I recommend that there be a small business application form (paper or electronic) and that you clearly state the information required for a completed application in your small business application procedures. The form removes ambiguities in the application process and helps with the compliance documentation.

One thing is for certain: when you request a personal credit bureau on the small business owner(s)/guarantor(s) and you currently do not have any credit exposure to the individual(s), you have received an application, and on this there is no debate. Bottom line, you need to define your application, and do so using objective criteria. Subjective criteria leave room for interpretation, and individual interpretation leaves doubt in the compliance area.

Information requirements

Whether you use a generic or custom small business scorecard or no scorecard at all, there are some baseline data segments that are important to collect on the small business applicant:

- Requested amount and purpose for the funds
- Collateral (if necessary based upon the product terms and conditions)
- General demographics on the business
  - Name and location
  - Business entity type (corporation, LLC, partnership, etc.)
  - Product and/or service provided
  - Length of time in business
  - Current banking relationship
- General demographics on the owners/guarantors
  - Names and addresses
  - Current banking relationship
  - Length of time with the business
- External data reports on the business and/or guarantors
  - Business report
  - Personal credit bureau on the owners/guarantors
- Financial statements (?) – we'll talk about that in part II of this post.

The demographics and the existing banking relationship are likely not causing any issues with anyone, and the requested amount and use of funds are elementary to the process. Probably the greatest debate is around the collection of financial information, and we are going to save that debate for the next post. The non-financial information noted above provides sufficient data to pull personal credit bureaus on the owners/guarantors and the business bureau on the actual borrower. We have even noted some additional data informing us of the length of time the business has been in existence and where the banking relationship is currently held for both the business and the owners.

But what additional information should be requested, or should I say required? Remember that the application not only supports the ability to render a decision but also supports the ability to document the loan, and it may even serve as a portion of the loan documentation. We need to consider the following:

- How standardized are the products we offer?
- Do we allow for customization of the collateral to be offered?
- Do we have standard loan/fee pricing?
- Is automatic debit for the loan payments required? Optional? Not available?
- Are personal guarantees required? Optional?

We again go back to the 80/20 rule. Product standardization is beneficial and optimal when we have high volumes and low dollars. The smaller the dollar size of the request/relationship, the more standardized our products need to be, and as a result our application can be more streamlined. When we do not negotiate rate, we do not need a space to note the requested rate. When we do not negotiate on personal guarantees, we always require that personal financial information be collected on all owners of the business (with some exceptions for very small ownership interests). Auto-debit for the loan payments means we always need some form of DDA account with our institution. I think you get the point: for the highest volume of applications, we standardize and thus streamline the process through the removal of ambiguity.

Do you process an incomplete application?

The most common argument for processing an incomplete application is that if we know we are going to decline the application based upon information on the personal credit bureau, why go through the effort of collecting and spreading the financial information? Two significant factors make this argument moot: customer satisfaction and fair lending regulation.

Customer satisfaction

This is based upon the ease of doing business with the financial institution, more specifically the number of contact points or information requests required during the process. Ideally, the number of contact points required once the applicant has decided to make a financing request should be minimal, with the information requirements clearly communicated up front and fully collected prior to rendering a decision. The idea that a quick no is preferable to submitting a full application actually works to make the declination process more efficient than the approval process. In other words, we are making the process more efficient and palatable for those clients we do NOT consider acceptable rather than for those clients who ARE acceptable. Secondly, if we accept and process incomplete applications, we are mis-prioritizing the application volume. Incomplete applications should never be processed ahead of completed packages, yet under the quick-no objective, the incomplete application is processed ahead of completed applications simply based upon date and time of submission. Consequently, we are incenting and fostering the submission of incomplete applications by our lenders. Bluntly, this is a backward approach that only serves to make the life of the relationship manager more efficient, not the client's.

Fair lending regulation

This perspective poses a potential issue when it comes to consistency. In my 10 years working with hundreds of financial institutions, only a very small minority of times have I encountered a financial institution willing to state with absolute certainty that a particular characteristic will cause an application to be declined 100% of the time. As a result, consider this scenario:

- Applicant A provides an incomplete application (missing financial statements, for example).
  - The application is processed in an incomplete status, with personal and business bureaus pulled.
  - The personal credit bureau has blemishes, which cause the financial institution to decline the application.
  - The process is complete.
- Applicant B provides a completed application package with financial statements.
  - The application is processed, with personal and business bureaus pulled and financial statements spread and analyzed.
  - The personal credit bureau has the same blemishes as Applicant A's.
  - Financial performance prompts the underwriter or lender to pursue an explanation of why the blemishes occurred, and the response is acceptable to the lender/underwriter.

Assuming Applicant A had similar financial performance, we have a case of inconsistency due to a portion of the information that we "state" is required for a complete application yet was not received prior to rendering the decision. Bottom line, this approach creates doubt with respect to inconsistent treatment, and we need to avoid any potential doubt in the minds of our regulators. (A short sketch of what objective completeness criteria can look like follows at the end of this post.)

Let's go back to the question of financial statements. Check back Thursday for my follow-up post, or part II, where we'll cover the topic in greater detail.
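As promised above, here is a minimal sketch of encoding a completed-application definition as objective criteria. The field names and required list are hypothetical, purely for illustration; your own procedures would define the actual required items.

```python
# Hypothetical sketch of "objective criteria" for a complete small business
# application. Field names and the required list are illustrative only.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

REQUIRED_FIELDS = [
    "requested_amount", "purpose", "business_name", "entity_type",
    "years_in_business", "owner_names", "owner_addresses",
]

@dataclass
class SmallBusinessApplication:
    received_on: date                    # Reg B record: when it was received
    data: dict = field(default_factory=dict)
    decided_on: Optional[date] = None    # Reg B record: when it was decided

    def missing_fields(self) -> list[str]:
        """Objective completeness test: every required field is present."""
        return [f for f in REQUIRED_FIELDS if not self.data.get(f)]

    def is_complete(self) -> bool:
        return not self.missing_fields()

app = SmallBusinessApplication(received_on=date.today(),
                               data={"requested_amount": 50_000,
                                     "purpose": "working capital"})
print("complete:", app.is_complete())
print("still needed:", app.missing_fields())
```

The point of a structure like this is that completeness is a yes/no test anyone can reproduce, leaving no room for the individual interpretation that creates compliance doubt.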
By: Joel Pruis

Part I – New Application Volume and the Business Banker

Generating small business or business banking applications may be one of the hottest topics in this segment at this time. Loan demand is down, and the pool of qualified candidates seems to be down as well. Trust me, I am not going to jump on the easy bandwagon and state that financial institutions have stopped pursuing small business loan applications. As I work across the country, I have yet to see a financial institution that is not actively pursuing small business loan applications. Loan growth is high on everyone's priority list, and it will be for some time. But where have all the applicants gone?

Based upon our data, the trend in application volume from 2006 to 2010 is as follows. [Chart not reproduced: application volume by institution asset size, 2006 vs. 2010.] At face value, we see that overall applications are actually down (1,032 in 2006 to 982 in 2010), while the largest financial institutions in the study were up from 18,616 to 25,427. Furthermore, the smallest financial institutions, with assets less than $500 million, showed a significant increase from 167 to 276, an increase of 65% from the 2006 level! But before we get too excited, we need to look a little further.

When we talk about increasing application volume, we are focusing on applications for new exposure or a new extension of credit, not renewals; the application count in the chart above includes renewals. So let's take a look at the comparison of the new request ratio between 2006 and 2010. [Chart not reproduced: new request ratio, 2006 vs. 2010.] Using this data in combination with the total application count, we get the measurements of new application volume in actual numbers. [Table not reproduced: new application volume in actual numbers.] Once we get under the numbers, we see that the gross application numbers truly don't tell the whole story. In fact, we can classify the change in new application volume by peer group. [Chart not reproduced: classification of the change in new application volume.]

So why did the credit unions and community banks do so well while the rest held steady or dropped significantly? The answer is based upon a few factors: field resources, application requirements, and underwriting criteria. In this blog we are going to focus on the first, field resources. The last two factors will be covered in the next two blogs. While they have a significant impact on application volume and are likely the cause of the application volume shift from 2006 to 2010, each represents a significant discussion that cannot be covered as a mere subtopic. More to come on those two items.

Field Resources Pursuing Small Business Applications: The Business Banker

Focus. Focus. Focus. The success of the small business segment depends upon the focus of the field pursuing the applications. As we move up in the asset size of the financial institution, we see more dedicated field resources for the small business/business banking segment. Whether these roles are called business bankers, small business development officers or business banking specialists, the common denominator is that they are dedicated to the small business/business banking space. Their goals depend on their performance in this segment, and they cannot pursue other avenues to achieve their targets. When we review the financial institutions in the less-than-$20B segment, the use of a dedicated business banker begins to diminish. Marketing and/or business development segmentation is blurred at best, and the field resource is better characterized as a commercial lender or commercial relationship manager.
The commercial lender is tasked with addressing business lending needs across a particular region. Goals are based upon total dollars generated, and there is no restriction outside of the legal or in-house lending limit of the specific financial institution. In this scenario, any focus on small business is left to the individual commercial lender. You will find some commercial lenders who truly enjoy and devote their efforts to the small business/business banking space. These individuals enjoy working with smaller businesses for a variety of reasons, such as the consultative approach (small businesses are hungry for advice, while larger businesses tend to get their advice elsewhere) or the ability to use one's lending authority. Unfortunately, while your financial institution may have such commercial lenders (ones who are truly working solely in the small business or business banking segment), changing that individual's title or formally committing them to working only in the small business/business banking segment is often perceived as a demotion. It is this perception that continues to hinder financial institutions with assets between $500 million and $20 billion from truly excelling in the small business/business banking space.

The reality is that the best field resource for generating the small business/business banking application volume available to your financial institution is the dedicated individual known as the business banker. Such an individual is capable of generating up to 250 applications per year (for the truly high performing). Even if we scale this back to 150 applications in a given year for new credit volume at an average request of $106,929 (the lowest dollar figure of the individual peer groups), the business banker would be generating total application dollars of $16,039,350. If we assume a 50% approval/closure rate, the business banker would generate a total of $8,019,675 in new credit exposure annually. Such exposure would have the potential to generate a net interest margin of $240,590, assuming a 3% NIM. Not too bad.
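The arithmetic behind those figures is straightforward; here it is as a small script, using the post's own inputs (the function itself is just a convenience, not anything from our data):

```python
# Back-of-the-envelope sketch of the business banker pipeline math above.
def pipeline(apps_per_year: int, avg_request: float,
             approval_rate: float, nim: float) -> dict:
    total_requested = apps_per_year * avg_request      # application dollars
    new_exposure = total_requested * approval_rate     # booked exposure
    return {
        "total_application_dollars": total_requested,
        "new_credit_exposure": new_exposure,
        "annual_net_interest_margin": new_exposure * nim,
    }

for name, value in pipeline(apps_per_year=150, avg_request=106_929,
                            approval_rate=0.50, nim=0.03).items():
    print(f"{name}: ${value:,.0f}")
# total_application_dollars: $16,039,350
# new_credit_exposure: $8,019,675
# annual_net_interest_margin: $240,590
```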
By: Mike Horrocks

Earlier this week, my wife and I were discussing the dinner plans for Thanksgiving. The yams, cranberries, and pumpkin pies were purchased, and the secret family recipes were pulled out of the cupboard. Everything was ready... we thought. Then the topic of the turkey was brought up. In the buzz of work, family, kids, etc., both of us had forgotten to get the turkey. We had each thought the other was covering this purchase and had scratched it off our respective lists. Our Thanksgiving dinner was at risk!

This made me think of what best practices from our industry could be utilized if I were going to mitigate risks and pull off the perfect dinner. So I pulled the page from the Basel Committee on Banking Supervision that defines operational risk as "the risk of loss resulting from inadequate or failed internal processes, people, systems or external events," and I have some suggestions that I think work for both your Thanksgiving dinner and your existing loan portfolios.

First, let's cover "inadequate or failed processes." Clearly our shopping list process failed. But how are your portfolio management processes? Are they clearly documented, and can they be implemented throughout the organization? Your processes should be as well communicated and documented as the "Smashed Yam Bake" recipe, or you may be at risk.

Next, let's focus on the "people and systems." People make mistakes: learn from them, correct them, and try to get the "systems" to make it so there are fewer mistakes. For example, I don't want the risk of letting the turkey cook too long, so I use a remote meat thermometer. Okay, it is a little geeky; however, the turkey has come out perfect every year. What systems do you have in place to make your quarterly reviews of the portfolio more consistent and up to your standards?

Lastly, how do I mitigate those "external events"? Odds are I will still be able to get a turkey tonight. If not, I talked to a friend of mine who is a chef, and I have plans for a goose. How flexible are your operations, and how accessible are the subject matter experts who can get you out of those situations? A solid risk management program takes unforeseen events into account and can turn them into opportunities.

So as the Horrocks family gathered in Norman Rockwell-like fashion this Thanksgiving, a moment of thanks was given to the folks on the Basel committee. Likewise, in your next risk review, I hope you can give thanks for the minimized losses and mitigated risks. Otherwise, we will have one thing very much in common... our goose will be cooked.
This is the first question in our five-part series on the FFIEC guidance and what it means for Internet banking. Check back each day this week for more Q&A on what you need to know and how to prepare for the January 2012 deadline.

Question: What does "layered security" actually mean?

"Layered" security refers to the arrangement of fraud tools in a sequential fashion. A layered approach starts with the simplest, most benign and unobtrusive methods of authentication and progresses toward more stringent controls as the activity unfolds and the risk increases. Consider a customer who logs onto an online banking session to execute a wire transfer of funds to another account. The layers of security applied to this activity might resemble:

1. Layer one: account log-in. Security = a valid ID and password must be provided.
2. Layer two: wire transfer request. Security = IP verification/confirmation that this PC has been used to access this account previously.
3. Layer three: a destination account is provided that has not been used to receive wire transfer funds in the past. Security = knowledge-based authentication.

Layered security gives an organization the ability to handle simple customer requests with minimal security and to strengthen security as risks dictate. A layered approach enables the vast majority of low-risk transactions to be completed without unnecessary interference, while high-risk transactions are sufficiently verified. (A simple sketch of this escalation logic appears below.)

Look for part two of our five-part series tomorrow.
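For illustration only, here is a minimal sketch of how the three-layer escalation above might be expressed in code; the control names and rules are hypothetical simplifications, not language from the FFIEC guidance.

```python
# Hedged sketch of layered security: higher-risk steps in a session add
# stronger controls. Names and rules are illustrative, not FFIEC text.
from enum import IntEnum

class Control(IntEnum):
    PASSWORD = 1       # layer one: valid ID + password at log-in
    DEVICE_CHECK = 2   # layer two: IP / known-device verification
    KBA = 3            # layer three: knowledge-based authentication

def required_controls(action: str, new_destination: bool) -> list[Control]:
    controls = [Control.PASSWORD]                 # every session starts here
    if action == "wire_transfer":
        controls.append(Control.DEVICE_CHECK)     # escalate on wire request
        if new_destination:
            controls.append(Control.KBA)          # escalate again: new payee
    return controls

print(required_controls("balance_check", new_destination=False))
print(required_controls("wire_transfer", new_destination=True))
```

The design point is the asymmetry the guidance calls for: the low-risk balance check clears with one benign control, while the risky wire to a never-used account accumulates all three.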
With the most recent guidance newly issued by the Federal Financial Institutions Examination Council (FFIEC), there is renewed conversation about knowledge based authentication. I think this is a good thing. It brings back into the forefront some of the things we have discussed for a while, like the difference between secret questions and dynamic knowledge based authentication, or the importance of risk-based authentication.

What does the new FFIEC guidance say about KBA? Acknowledging that many institutions use challenge questions, the FFIEC guidance highlights that the implementation of challenge questions can greatly impact their efficacy. Chances are you already know this. Of greater importance, though, is the fact that the FFIEC guidelines caution against less sophisticated systems and information that can be easily guessed or obtained from an Internet search, given the amount of information available. The FFIEC guidelines call for questions that "do not rely on information that is often publicly available," recommending instead a broad range of data assets on which to base questions. This is an area knowledge based authentication users should review carefully. At this point it is perfectly appropriate to ask, "Does my KBA provider rely on data that is publicly sourced?" If you aren't sure, ask for and review data sources. At a minimum, you want to look for the following in your KBA provider:

- Questions! Diverse questions from broad data categories, including credit and noncredit assets
- Consumer question performance as one of the elements within an overall risk-based decisioning policy
- Robust performance monitoring: monitor against established key performance indicators, and do it often
- A process to rotate questions and adjust access parameters and velocity limits; keep fraudsters guessing! (see the sketch at the end of this post)
- Use of the resources available to you; Experian has compiled information that you might find helpful: www.experian.com/ffiec

Finally, I think the release of the new FFIEC guidelines may have made some people wonder if this is the end of KBA. I think the answer is a resounding "no." Not only do the FFIEC guidelines support the continued use of knowledge based authentication, but recent research suggests that KBA is the authentication tool consumers identify as most effective. Where I would draw caution is when research doesn't distinguish between "secret questions" and dynamic knowledge based authentication, which we all know are very different.
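To make the rotation-and-velocity bullet concrete, here is a hypothetical sketch of those two hygiene controls; the category names, limits, and function are invented for illustration and are not any vendor's API.

```python
# Hypothetical sketch of two KBA hygiene controls: rotating question
# categories and enforcing a per-consumer velocity limit. Illustrative only.
import itertools
import time
from collections import defaultdict, deque
from typing import Optional

CATEGORIES = ["credit_history", "property", "vehicle", "employment"]
rotation = itertools.cycle(CATEGORIES)   # rotate categories across sessions

MAX_ATTEMPTS = 3                         # invented velocity limit
WINDOW_SECONDS = 24 * 3600               # rolling 24-hour window
attempts: dict = defaultdict(deque)

def allow_kba_attempt(consumer_id: str, now: Optional[float] = None) -> bool:
    """Velocity limit: at most MAX_ATTEMPTS per rolling window."""
    now = time.time() if now is None else now
    window = attempts[consumer_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()                 # drop attempts outside the window
    if len(window) >= MAX_ATTEMPTS:
        return False                     # force step-up or manual review
    window.append(now)
    return True

print("next category:", next(rotation))
print("attempt allowed:", allow_kba_attempt("cust-123"))
```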
By: Mike Horrocks

Have you ever been struck by a turtle, or better yet, burnt by water skis that were on fire? If you are like me, these are not accidents that I think will ever happen to me, and I'm not concerned that my family doctor didn't do a rotation in medical school to specialize in treating them. On October 1, 2013, however, doctors and hospitals across the U.S. will have the ability to identify, log, bill, and track those accidents and thousands of other very specific medical events. In fact, the list will jump from the current 18,000 medical codes to 140,000 medical codes. Some people hail this as a great step toward the management of all types of medical conditions, whereas others view it as an introduction of noise into a medical system that is already overburdened.

What does this have to do with credit risk management, you ask? When I look at the amount of financial and non-financial data that the credit industry has available to understand the risk of our consumer or business clients, I wonder where we are on the range from "take two aspirins and call me in the morning" to "[the accident] occurred inside a chicken coop" (code Y9272). Are we only identifying a risky consumer after they have defaulted on a loan? Or are we trying to find a pattern in the consumer's purchases at a coffee house that would correlate with some other data point to indicate risk when the moon is full? The answer is somewhere in between, and it will be different for each institution.

Let's start with what is known to be predictive when it comes to monitoring our portfolios (data and analytics, coupled with portfolio risk monitoring to minimize risk exposure) and then expand that over time. Click here for a recent case study that demonstrates this quite successfully with one of our clients. Next steps could include adding analytics and/or triggers to identify certain risks more specifically. When it comes to risk, incorporating attributes or a solid set of triggers that identify risk early and can drill down to specific events, combined with technology that streamlines portfolio management processes (whether you have an existing system in place or are in search of a migration), will give you better insight into the risk profile of your consumers. A hypothetical sketch of such triggers appears below.

Think about where your organization lies on the spectrum. If you are already monitoring your portfolio with some of these solutions, consider the next logical step to improve the process: more data, advanced analytics using that data, a combination of both, or perhaps a better system for monitoring the risk more closely. Wherever you are, don't let your institution have the financial equivalent of needing the new medical codes W2202XA, W2202XD, and W2202XS (injuries resulting from walking into a lamppost once, twice, and sequentially).
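As referenced above, here is a hypothetical sketch of a "solid set of triggers" for portfolio monitoring; the trigger names, thresholds, and account fields are invented for illustration and are not from any Experian product.

```python
# Hypothetical portfolio-monitoring triggers; thresholds and field names
# are invented for illustration only.
ACCOUNT = {"utilization": 0.92, "days_past_due": 35,
           "score_drop_90d": 60, "new_inquiries_30d": 4}

TRIGGERS = [
    ("high_utilization",  lambda a: a["utilization"] > 0.85),
    ("early_delinquency", lambda a: 30 <= a["days_past_due"] < 60),
    ("score_migration",   lambda a: a["score_drop_90d"] >= 40),
    ("credit_seeking",    lambda a: a["new_inquiries_30d"] >= 3),
]

# Evaluate every rule and queue the account for review if any fire.
fired = [name for name, rule in TRIGGERS if rule(ACCOUNT)]
print("review queue" if fired else "no action", fired)
```

The value of encoding triggers this way is that "identify risk early" becomes an explicit, testable rule set rather than something discovered only after a default.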
By: Kari Michel

The way medical debts are treated in scores may change with the Medical Debt Responsibility Act, introduced in June 2011. The Medical Debt Responsibility Act would require the three national credit bureaus to expunge medical collection records of $2,500 or less from files within 45 days of their being paid or settled. The bill is co-sponsored by Representatives Heath Shuler (D-N.C.), Don Manzullo (R-Ill.) and Ralph M. Hall (R-Texas).

As a general rule, expunging predictive information is not in the best interest of consumers or credit granters, both of which benefit when credit reports and scores are as accurate and predictive as possible. If any type of debt information proven to be predictive is expunged, consumers risk exposure to improper credit products, as they may appear more financially equipped to handle new debt than they truly are.

Medical debts are never taken into consideration by VantageScore® Solutions LLC if the debt is known to be reported by a medical facility. When a medical debt is outsourced to a third-party collection agency, it is treated the same as other debts in collection. Collection accounts of less than $250, or ones that have been settled, have less impact on a consumer's VantageScore® credit score. With or without the medical-debt-in-collection information, the VantageScore® credit score model remains highly predictive.
As I'm sure you are aware, the Federal Financial Institutions Examination Council (FFIEC) recently released its "Supplement to Authentication in an Internet Banking Environment," guiding financial institutions to mitigate risk using a variety of processes and technologies as part of a multi-layered approach. In light of this updated mandate, businesses need to move beyond simple challenge-and-response questions to more complex out-of-wallet authentication. Additionally, those incorporating device identification should look to more sophisticated technologies well beyond traditional IP address verification alone. Recently, I contributed to an article on how these new guidelines might affect your institution. Check it out here, in full: http://ffiec.bankinfosecurity.com/articles.php?art_id=3932. For more on what the FFIEC guidelines mean to you, check out these resources, which also give you access to a recent Webinar.
The following article was originally posted on August 15, 2011 by Mike Myers on the Experian Business Credit Blog.

Last time we talked about how credit policies are like a plant grown from a seed. They need regular review and attention, just like the plants in your garden, to really bloom. A credit policy is simply a consistent guideline to follow when decisioning accounts, reviewing accounts, collecting and setting terms. Opening accounts is just the first step. Here are a few key items to consider in reviewing accounts:

- How many of your approved accounts are paying you late?
- What is their average days beyond terms?
- How much credit have they been extended?
- What attributes of these late-paying accounts can predict future payment behavior?

I recently worked with a client to create an automated credit policy that consistently reviews accounts based on predictive credit attributes, public records and exception rules using the batch account review decisioning tools within BusinessIQ. The credit team now feels like they are proactively managing their accounts instead of just reacting to them. A solid credit policy not only focuses on opening accounts but also on regular account review, which can help you reduce your overall risk.
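For a sense of what such an automated review can look like, here is a deliberately simplified sketch; the rules, thresholds, and field names are invented for illustration and are not the BusinessIQ API.

```python
# Invented sketch of a batch account-review policy: attribute rules plus an
# exception rule applied uniformly to every account. Illustrative only.
ACCOUNTS = [
    {"id": "A-001", "days_beyond_terms": 42, "credit_limit": 25_000,
     "public_record": False},
    {"id": "A-002", "days_beyond_terms": 5, "credit_limit": 10_000,
     "public_record": True},
]

def review(account: dict) -> str:
    if account["public_record"]:
        return "manual review"        # exception rule: escalate to an analyst
    if account["days_beyond_terms"] > 30:
        return "reduce limit"         # predictive-attribute rule
    return "maintain terms"

# Batch pass: every account gets the same consistent treatment.
for acct in ACCOUNTS:
    print(acct["id"], "->", review(acct))
```

Encoding the policy as rules like these is what makes review proactive: the same criteria run against the whole portfolio on a schedule, rather than waiting for a late payment to force a reaction.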