
Part I: Types and Complexity of Models, and Unobservable or Omitted Variables or Relationships
By: John Straka

Since the financial crisis, it's not unusual to read articles here and there about the "failure of models." For example, a recent piece in Scientific American critiqued financial model "calibration," proclaiming in its title, Why Economic Models Are Always Wrong. In the mortgage business, for example, it is important to understand where models have continued to work, as well as where they failed, and what this all means for the future of your servicing and origination business. I also see examples of loose understanding of best practices in relation to the shortcomings of models that do work, and of the comparative strengths and weaknesses of alternative judgmental decision processes. With their automation efficiencies, consistency, valuable added insights, and testability for reliability and robustness, statistical business models driven by extensive and growing data remain all around us today, and they are continuing to expand. So regardless of your views on the values and uses of models, it is important to have a clear view and sound strategies in model usage.

A Categorization: Ten Types of Models

Business models used by financial institutions can be placed in more than ten categories, of course, but here are ten prominent general types of models:

1. Statistical credit scoring models (typically for default)
2. Consumer- or borrower-response models
3. Consumer- or borrower-characteristic prediction models
4. Loss given default (LGD) and exposure at default (EAD) models
5. Optimization tools (these are not models, per se, but mathematical algorithms that often use inputs from models)
6. Loss forecasting and simulation models and value-at-risk (VaR) models
7. Valuation, option pricing, and risk-based pricing models
8. Profitability forecasting and enterprise-cash-flow projection models
9. Macroeconomic forecasting models
10. Financial-risk models that model complex financial instruments and interactions

Types 8, 9, and 10, for example, are often built up from multiple component models, and for this reason and others, these model categories are not mutually exclusive. Types 1 through 3, for example, can be built from either individual-level data (typical) or group-level data. No categorical listing of models is perfect, and this listing is also not intended to be completely exhaustive.

The Strain of Complexity (or Model Ambition)

The principle of Occam's razor in model building, roughly translated, parallels the business dictum to "keep it simple, stupid." Indeed, the general ordering of model types 1 through 10 above (you can quibble on the details) tends to correspond to growing complexity, or growing model ambition. Model types 1 and 2 typically forecast a rank-ordering, for example, rather than also forecasting a level. Credit scores and credit scoring typically seek to rank-order consumers in their default, loss, or other likelihoods, without attempting to project the actual level of default rates across the score distribution. Scoring models that add the dimension of level prediction take on that additional layer of complexity. In addition, model types 1 through 3 are generally unconditional predictors: they make no attempt to predict the time path of the dependent variable.
Predicting not just a consumer's relative likelihood of an event over a future time period as a whole, for example, but also the event's frequency level and the time path of that level each year, quarter, or month, is a more complex and ambitious modeling endeavor. (This problem is generally approached through continuous or discrete hazard models.) While generalizations can be hazardous (exceptions can typically be found), it is generally true that, in the events leading up to and surrounding the financial crisis, greater model complexity and ambition were correlated with greater model failure. For example, at what is perhaps an extreme, Coval, Jurek, and Stafford (2009) demonstrated how, for model type 10, even slight unexpected changes in default probabilities and correlations had a substantial impact on the expected payoffs and ratings of typical collateralized debt obligations (CDOs) with subprime residential mortgage-backed securities as their underlying assets. Nonlinear relationships in complex systems can generate extreme unreliability in system predictions. To a lesser but still significant degree, the mortgage- or housing-related models included or embedded in types 6 through 10 were heavily dependent on home-price projections and risk simulation, which caused significant failures of those models' expected-case projections after 2006. Home-price declines in 2007-2009 reached what had previously only been simulated as extreme and very unlikely stress paths. Despite this clear problem, and given the inescapably large impact of home prices on any mortgage model or decision system (of any kind), it is generally acceptable to separate the failure of the home-price projection from any failure of the relative default and other model relationships built around the possible home-price paths. In other words, if a model of type 8, for example, predicted actual profitability and enterprise cash flow quite well given the actual extreme path of home prices, then this model can reasonably be regarded as not having failed as a model per se, despite the clear but inescapable reliance of the model's level projections on the uncertain home-price outcomes.

Models of type 1, statistical credit scoring models, generally continued to work well or reasonably well both in the years preceding and during the home-price meltdown and financial crisis. This is largely due to these models' relatively modest objective of simply rank-ordering risks. To be sure, scoring models in mortgage, and more generally, were strongly impacted by the home-price declines and unusual events of the bubble and subsequent recession, with deteriorated strength in risk separation. This can be seen, for example, in the recent VantageScore® credit score stress-test study, VantageScore® Stress Testing, which shows the lowest risk-separation ability in the states with the worst home-price and unemployment outcomes (CA, AZ, FL, NV, MI). But these significant yet comparatively modest magnitudes of deterioration were neither debilitating nor permanent for these models. In short, even in mortgage, scoring models generally held up pretty well through the crisis—not perfectly, but comparatively better than the more complex level-, system-, and path-prediction models (see footnote 1). Scoring models have also relied more exclusively on microeconomic behavioral stabilities, rather than incorporating macroeconomic risk modeling. Fortunately, the microeconomic behavioral patterns have generally been much more stable.
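To make the nonlinearity point concrete (the Coval, Jurek, and Stafford finding that even slight unexpected changes in default probabilities and correlations can swamp expected CDO payoffs), here is a minimal simulation sketch. It is not their model; it is a one-factor Gaussian copula with purely illustrative parameters, intended only to show how a small misestimate at the loan level becomes a large error at the senior-tranche level.

    # A minimal sketch (not the Coval-Jurek-Stafford model itself): a one-factor
    # Gaussian copula simulation of a senior tranche on a pool of 100 loans.
    # Every parameter value is an illustrative assumption, chosen only to show how
    # small shifts in default probability and correlation move tranche losses.
    import numpy as np
    from scipy.stats import norm

    def senior_tranche_expected_loss(p_default, rho, attach=0.15, lgd=0.6,
                                     n_assets=100, n_sims=50_000, seed=1):
        """Expected loss on the tranche absorbing pool losses above `attach`."""
        rng = np.random.default_rng(seed)
        threshold = norm.ppf(p_default)               # per-loan default threshold
        common = rng.standard_normal((n_sims, 1))     # systematic factor
        idio = rng.standard_normal((n_sims, n_assets))
        asset_value = np.sqrt(rho) * common + np.sqrt(1 - rho) * idio
        pool_loss = lgd * (asset_value < threshold).mean(axis=1)
        tranche_loss = np.clip(pool_loss - attach, 0, None) / (1 - attach)
        return tranche_loss.mean()

    base = senior_tranche_expected_loss(p_default=0.05, rho=0.15)
    stressed = senior_tranche_expected_loss(p_default=0.07, rho=0.30)  # modest misestimates
    print(f"baseline senior-tranche expected loss: {base:.4%}")
    print(f"stressed senior-tranche expected loss: {stressed:.4%}")

Running the sketch, the stressed case produces a senior-tranche expected loss many times the baseline even though the input changes look modest; that disproportion is the fragility described above. The rank-ordering behavior that scoring models rely on is, by contrast, a far less fragile target.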
Weak-credit borrowers, for example, have long tended to default at significantly higher rates than strong-credit borrowers—they did so preceding, and right through, the financial crisis, even as overall default levels changed dramatically; and they continue to do so today, in both strong and weak housing markets (see footnote 2). As a general rule, the more complex and ambitious the model, the more numerous and complex are the questions that have to be asked about what could go wrong. But relative complexity is certainly not the only type of model risk. Sometimes relative simplicity, otherwise typically desirable, can go in the wrong direction.

Unobservable or Omitted Variables or Relationships

No model can be perfect, for many reasons. Important determining variables may be unmeasured or unknown. Similarly, important parameters and relationships may differ significantly across different types of populations and different time periods. How many models have been routinely "stress tested" on their robustness in handling different types of borrower populations (where unobserved variables tend to lurk) or shifts in the mix of borrower sub-populations? This issue is more or less relevant depending on the business and statistical problem at hand, but overall, modeling practice has tended more often than not to neglect robustness testing (i.e., tests of validity and model power beyond validation samples). Several related examples from the last decade appeared in models that were used to help evaluate subprime loans. These models used generic credit scores together with LTV, and perhaps a few other variables (or not), to predict subprime mortgage default risks in the years preceding the market meltdown. This was a hazardous extension of relatively simple model structures that had worked better for prime mortgages (but had also previously been extended there). Because the large majority of subprime borrowers had weak credit records, for example, generic credit scores did not help nearly as much to separate risk. Detailed credit attributes were needed to help better predict default risks in subprime. Many pre-crisis subprime models of this kind were thus simplified, but overly so, as they began with important omitted variables. Nor was this the only omitted-variables problem in this case, or the only problem. Other observable mortgage risk factors were oddly absent in some models. Unobserved credit-risk factors also tend to be correlated with observed risk factors, creating greater volatility and unexplained levels of higher risk in observed higher-credit-risk populations. Traditional subprime mortgages also focused mainly on poor-credit borrowers who needed cash-out refinancing for debt consolidation or some other purpose. Such borrowers, in shaky financial condition, were more vulnerable to economic shocks, but a debt-consolidating cash-out mortgage could put them in a better position, with lower total monthly debt payments that were tax deductible. So far, so good—but an omitted capacity-risk variable was the number of previous cash-out refinancings a borrower had done (which loan brokers were incented to "churn"). The housing bubble allowed weak-capacity borrowers to sustain themselves through more extracted home equity, until the music stopped. Rate and fee structures of many subprime loans further heightened capacity risks.
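The robustness testing described above (checking whether a model's risk separation survives on populations it was not developed on) is straightforward to set up, and the sketch below shows the general idea on synthetic data. It is a toy example, not any production scorecard: a two-variable model is fit on a "prime-like" sample in which an unobserved capacity factor matters little, then scored on a "subprime-like" sample in which that omitted variable drives far more of the risk. All of the variable names and weights are assumptions.

    # Toy robustness check: fit on one population, measure separation (AUC) on a
    # shifted population where an omitted "capacity" variable matters much more.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    def make_population(n, omitted_weight):
        credit = rng.normal(size=n)        # generic-score-like feature (observed)
        ltv = rng.normal(size=n)           # loan-to-value (observed)
        capacity = rng.normal(size=n)      # omitted variable (never given to the model)
        logit = -2.0 - 1.0 * credit + 0.5 * ltv - omitted_weight * capacity
        default = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
        return np.column_stack([credit, ltv]), default

    X_dev, y_dev = make_population(50_000, omitted_weight=0.2)   # development sample
    X_new, y_new = make_population(50_000, omitted_weight=1.5)   # shifted population

    model = LogisticRegression().fit(X_dev, y_dev)
    print("AUC, development population:", round(roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1]), 3))
    print("AUC, shifted population:    ", round(roc_auc_score(y_new, model.predict_proba(X_new)[:, 1]), 3))

A pronounced drop in out-of-population separation of this kind is exactly the early warning that should prompt added variables, such as detailed credit attributes, or a re-estimated model before a scorecard is extended to a new population.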
A significant population shift also occurred when subprime mortgage lenders significantly raised their allowed LTVs and added many more shaky purchase-money borrowers last decade; previously, targeted affordable-housing programs from the banks and the conforming-loan space had generally required stronger credit histories and capacity. Significant shifts like this in any modeled population require very extensive model robustness testing and scrutiny. But instead, projected subprime-pool losses from the major purchasers of subprime loans, and from the rating agencies, went down, not up, in the years just prior to the home-price meltdown (to levels well below those seen in widely available private-label subprime pool losses from 1990s loans).

Rules and Tradition in Lieu of Sound Modeling

Interestingly, however, these errant subprime models were not the models used in lender underwriting and automated underwriting systems for subprime—the front-end suppliers of new loans for private-label subprime mortgage-backed securities. Unlike the conforming-loan space, where automated underwriting using statistical mortgage credit scoring models grew dramatically in the 1990s, underwriting in subprime, including automated underwriting, remained largely based on traditional rules. These rules were not bad at rank-ordering the default risks, as traditional classifications of subprime A-, B, C, and D loans showed. However, the rules did not adapt well to changing borrower populations and growing home-price risks either. Generic credit scores improved for most subprime borrowers last decade as they were buoyed by the general housing boom and economic growth. As a result, subprime-lender-rated C and D loans largely disappeared and the A- risk classification grew substantially. Moreover, in those few cases where statistical credit scoring models were estimated on subprime loans, they identified and separated the risks within subprime much better than the traditional underwriting rules. (I authored an invited Journal of Housing Research article early last decade that included a graph, p. 222, demonstrating this.) But statistical credit scoring models were scarcely, if ever, used in most subprime mortgage lending.

In Part II, I'll discuss where models are most needed now in mortgages.

Footnotes:
[1] While credit scoring models performed better than most others, modelers can certainly do more to improve on and learn from the performance declines at the height of the home-price meltdown. Various approaches have been undertaken to seek such improvements.
[2] Even strategic mortgage defaults, while comprising a relatively larger share of strong-credit borrower defaults, have not significantly changed the traditional rank-ordering, as strategic defaults occur across the credit spectrum (weaker credit histories include borrowers with high income and assets).

By: Staci Baker Just before the holidays, the Fed released proposed rules, which implement Sections 165 and 166 of the Dodd-Frank Act. According to The American Bankers Association, "The proposals cover such issues as risk-based capital requirements, leverage, resolution planning, concentration limits and the Fed's plans to regulate large, interconnected financial institutions and nonbanks." How will these rules affect you? One of the biggest concerns I have been hearing from institutions is the effect the proposed rules will have on profitability. Greater liquidity requirements, created by both the Dodd-Frank Act and the Basel III rules, put pressure on banks to re-evaluate which lending segments they will continue to participate in, as well as impact the funds available for lending to consumers. What are you doing to proactively combat this? Within the Dodd-Frank Act is the Durbin Amendment, which regulates the interchange fees merchants are charged. As I noted in my prior blog detailing the fee cap associated with the Durbin Amendment, it's clear that these new regulations, in combination with previous rulings, will continue to put downward pressure on bank profitability. With all of this to consider, how will banks modify their business models to maintain a healthy bottom line while keeping customers happy? Over my next few blog posts, I will take a look at the Dodd-Frank Act's effect on an institution's profitability and highlight best practices to manage the impact to your organization.

For as long as there have been loans, there has been credit risk and risk management. In the early days of US banking, the difficulty of assessing risk meant that lending was severely limited, and many people were effectively locked out of the lending system. Individual review of loans gave way to numerical scoring systems used to make more consistent credit decisions, which later evolved into the statistically derived models we know today. Use of credit scores is an essential part of almost every credit decision made today. But what is the next evolution of credit risk assessment? Does the current look at a single number tell us all we need to know before extending credit? As shown in a recent score stability study, VantageScoreSM remains very predictive even in highly volatile cycles. While generic risk scores remain the most cost-effective, expedient and compliant method of assessing risk, this last economic cycle clearly shows a need for additional metrics (including other generic scores) to more fully illuminate the inherent risk of an individual from every angle. We've seen financial institutions tightening their lending policies in response to recent market conditions, sometimes to the point of hampering growth. But what if there were an opportunity to revisit this strategy with additional analytics to ensure continued growth without increasing risk? We'll plan to explore that further over the coming weeks, so stick with me. And if there is a specific question or idea on your mind, leave a comment and we'll cover that too.
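One way to picture the addition of other metrics is a simple two-dimensional policy in which a second measure is consulted only in the borderline band of the primary score. The sketch below is my own illustration; the thresholds, bands and field names are assumptions, not a recommended strategy.

    # Purely illustrative: a second metric is consulted only in the borderline band
    # of the primary score, rather than deciding on the single number alone.
    from dataclasses import dataclass

    @dataclass
    class Applicant:
        primary_score: int        # e.g., a generic risk score
        secondary_metric: float   # e.g., a second generic score scaled 0-1, or another analytic

    def decision(app: Applicant) -> str:
        if app.primary_score >= 720:      # clear approve band (assumed cutoff)
            return "approve"
        if app.primary_score < 620:       # clear decline band (assumed cutoff)
            return "decline"
        # Borderline band: let the second metric illuminate risk from another angle.
        return "approve" if app.secondary_metric >= 0.7 else "refer for manual review"

    print(decision(Applicant(700, 0.8)))  # borderline score, strong second metric -> approve
    print(decision(Applicant(700, 0.4)))  # same score, weaker second metric -> refer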

As we kick off the new year, I thought I'd dedicate a few blog posts to covering what some of the consumer credit trends are pointing to for potential growth opportunities in 2012, specifically new loan originations for bankcard, automotive and real estate lending. With the holiday season behind us (and if you're anything like me, you have the credit card statements to prove it!), I thought I'd start off with bankcards for my first post of the year. Everyone's an optimist at the start of a new year, and bankcard issuers have a right to feel cautiously optimistic about 2012 based on the trends of last year. In the second quarter of 2011, origination volumes grew to nearly $47B, up 28% from the same quarter a year earlier. In fact, originations have been growing steadily since the middle of 2010, with increasing distribution across all VantageScore risk bands and an impressive 42% increase in A-paper volume. So, is bankcard the new power portfolio for growth in 2012? The broad origination risk distribution may signal the return of balance-carrying consumers (a.k.a. revolvers) relative to those who pay with credit cards but pay off the balance every month (a.k.a. transactors). The tighter lending criteria imposed in recent years have improved portfolio performance significantly, but at the expense of the interest income generated by revolver use. This could change as more credit cards are put in the hands of a broader consumer risk base. And as consumer confidence continues to grow (it reached 64.5 in December, 10 points higher than in November, according to the Conference Board), consumers in all risk categories will no doubt begin to leverage credit cards more heavily for continued discretionary spend, as highlighted in the most recent Experian – Oliver Wyman quarterly webinar. Of course, portfolio growth with increased risk exposure requires a watchful eye on the delinquency performance of outstanding balances. We continue to be at or near historic lows for delinquency, but we did see a small uptick in early-stage delinquencies in the third quarter of 2011. That being said, issuers appear to have a good pulse on the card-carrying consumer and are capitalizing on improved payment behavior to maximize their risk/reward payoff. So, all in all, strong 2011 results and portfolio positioning have set the table for a promising 2012. Add an improving economy to the mix, and card issuers could shift from cautious to confident in their optimism for the new year.

By: Joel Pruis

Small Business Application Requirements

The debate on what constitutes a small business application is probably second only to the ongoing debate around centralized vs. decentralized loan authority (but we will get to that topic in a couple of blogs later). We have a couple of topics that need to be considered in this discussion, namely:

1. When is an application an application?
2. Do you process an incomplete application?

When is an application an application?

Any request by a small business with annual sales of $1,000,000 or less falls under Reg B. As we all know, because of this regulation we have to maintain proper records of when we received an application and when a decision on the application was made as well as communicated to the client. To keep yourself out of trouble, I recommend that there be a small business application form (paper or electronic) and that you clearly state the information required for a completed application in your small business application procedures. The form removes ambiguities from the application process and helps with the compliance documentation. One thing is for certain – when you request a personal credit bureau on the small business owner(s)/guarantor(s) and you currently do not have any credit exposure to the individual(s), you have received an application, and on this there is no debate. Bottom line: you need to define your application, and do so using objective criteria. Subjective criteria leave room for interpretation, and individual interpretation leaves doubt in the compliance area.

Information requirements

Whether you use a generic or custom small business scorecard or no scorecard at all, there are some baseline data segments that are important to collect on the small business applicant:

· Requested amount and purpose for the funds
· Collateral (if necessary based upon the product terms and conditions)
· General demographics on the business
  o Name and location
  o Business entity type (corporation, LLC, partnership, etc.)
  o Product and/or service provided
  o Length of time in business
  o Current banking relationship
· General demographics on the owners/guarantors
  o Names and addresses
  o Current banking relationship
  o Length of time with the business
· External data reports on the business and/or guarantors
  o Business report
  o Personal credit bureau on the owners/guarantors
· Financial statements (?) – we'll talk about that in part II of this post.

The demographics and the existing banking relationship are likely not causing any issues with anyone, and the requested amount and use of funds are elementary to the process. Probably the greatest debate is around the collection of financial information, and we are going to save that debate for the next post. The non-financial information noted above provides sufficient data to pull personal credit bureaus on the owners/guarantors and the business bureau on the actual borrower. We have even noted some additional data informing us of the length of time the business has been in existence and where the banking relationship is currently held for both the business and the owners. But what additional information should be requested, or should I say required? We have to remember that the application is not only to support the ability to render a decision but also supports the ability to document the loan and may even serve as a portion of the loan documentation. We need to consider the following:

· How standardized are the products we offer?
· Do we allow for customization of collateral to be offered?
· Do we have standard loan/fee pricing?
· Is automatic debit for the loan payments required? Optional? Not available?
· Are personal guarantees required? Optional?

We again go back to the 80/20 rule. Product standardization is beneficial and optimal when we have high volumes and low dollars. The smaller the dollar size of the request/relationship, the more standardized we need to have our products, and as a result our application can be more streamlined. When we do not negotiate rate, we do not need to have a space to note the requested rate. When we do not negotiate on personal guarantees, we always require that personal financial information be collected on all owners of the business (with some exceptions for very small ownership interests). Auto-debit for the loan payments means we always need to have some form of a DDA account with our institution. I think you get the point: for the highest volume of applications we standardize and thus streamline the process through the removal of ambiguity.

Do you process an incomplete application?

The most common argument for processing an incomplete application is that if we know we are going to decline the application based upon information on the personal credit bureau, why go through the effort of collecting and spreading the financial information? Two significant factors make this argument moot: customer satisfaction and fair lending regulation.

Customer satisfaction

This is based upon the ease of doing business with the financial institution – more specifically, the number of contact points or information requests that are required during the process. Ideally, the number of contact points required once the applicant has decided to make a financing request should be minimal, with the information requirements clearly communicated up front and fully collected prior to rendering a decision. The idea that a quick "no" is preferable to submitting a full application actually works to make the declination process more efficient than the actual approval process. In other words, we are making the process more efficient and palatable for those clients we do NOT consider acceptable versus those clients that ARE acceptable. Secondly, if we accept and process incomplete applications, we are actually mis-prioritizing the application volume. Incomplete applications should never be processed ahead of completed packages, yet under the quick-no objective, the incomplete application is processed ahead of completed applications simply based upon date and time of submission. Consequently, we are actually incenting and fostering the submission of incomplete applications by our lenders. Bluntly, this is a backward approach that only serves to make the life of the relationship manager more efficient, not the client's.

Fair lending regulation

This perspective poses a potential issue when it comes to consistency. In my 10 years working with hundreds of financial institutions, only a very small minority of times have I encountered a financial institution that is willing to state with absolute certainty that a particular characteristic will cause an application to be declined 100% of the time. As a result, I wish to present this scenario:

· Applicant A provides an incomplete application (missing financial statements, for example).
  o The application is processed in an incomplete status with personal and business bureaus pulled.
  o The personal credit bureau has blemishes, which cause the financial institution to decline the application.
  o The process is complete.
· Applicant B provides a completed application package with financial statements.
  o The application is processed with personal and business bureaus pulled, financial statements spread and analysis performed.
  o The personal credit bureau has the same blemishes as Applicant A's.
  o Financial performance prompts the underwriter or lender to pursue an explanation of why the blemishes occurred, and the response is acceptable to the lender/underwriter.

Assuming Applicant A had similar financial performance, we have a case of inconsistency due to a portion of the information that we "state" is required for an application to be complete yet was not received prior to rendering the decision. Bottom line: the approach creates doubt with respect to inconsistent treatment, and we need to avoid any potential doubt in the minds of our regulators. Let's go back to the question of financial statements. Check back Thursday for my follow-up post, or part II, where we'll cover the topic in greater detail.
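To make the completeness requirement concrete, here is a minimal sketch of an objective completeness check of the kind described above. The field names are illustrative only, not a compliance checklist; your own application procedures would define the authoritative list.

    # Define "complete" once, objectively, and test it before an application is
    # processed or prioritized. Item names are illustrative assumptions.
    REQUIRED_ITEMS = {
        "requested_amount", "purpose",
        "business_name", "business_location", "entity_type",
        "product_or_service", "years_in_business", "business_bank",
        "owner_names", "owner_addresses", "owner_years_with_business",
        "financial_statements",   # include or drop per your credit policy (see part II)
    }

    def missing_items(application: dict) -> set:
        """Return the required items that are absent or empty."""
        return {item for item in REQUIRED_ITEMS
                if item not in application or application[item] in (None, "", [])}

    def is_complete(application: dict) -> bool:
        return not missing_items(application)

    app = {"requested_amount": 50_000, "purpose": "working capital",
           "business_name": "Example Co", "entity_type": "LLC"}
    print(sorted(missing_items(app)))   # objectively documents what is still outstanding

Because the required items are written down in one place, the same list can drive the application form, the compliance documentation, and the decision on whether a package is ready to be prioritized.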


Within the world of cyber security, a great deal of attention has been focused lately on the escalating hazards and frequency of data breaches, with considerable discussion of the high cost of such breaches. But as the industry has assessed the financial toll of breaches, it has never taken into account how data breaches harm reputations and brand image, and consequently a company's bottom line. Until now. A recently released Ponemon Institute study, sponsored by Experian's Data Breach Resolution and believed to be the first of its kind, explores the "Reputation Impact of a Data Breach" to provide more context for the full scope of data breaches. The findings draw enlightening conclusions about the financial toll that data breaches take on harmed corporate reputations, including these key takeaways:

· Reputation is one of an organization's most important and valuable assets. Reputation and brand image are perceived as very valuable…and highly vulnerable to negative events, including a data breach.
· Calculating the value of reputation and brand reveals how valuable these assets are to an organization. The average value of brand and reputation for the study's participating organizations was determined to be approximately $1.5 billion. Depending upon the type of information lost as a result of the breach, the average loss in the value of the brand ranged from $184 million to more than $330 million. Depending upon the type of breach, the value of brand and reputation could decline by as much as 17 percent to 31 percent.
· Not all data breaches are equal. Some breaches are more devastating than others to an organization's reputation and brand image, with the loss or theft of customer information ranked as the most devastating (followed by confidential financial business information and confidential non-financial business information).
· Data breaches occur in most organizations represented in this study and have at least a moderate or significant impact on reputation and brand image. According to 82 percent of respondents, their organizations have had a data breach involving sensitive or confidential information. Fifty-three percent say the data breaches had a moderate impact on reputation and brand image, and 23 percent say the impact was significant.
· Most organizations in the study have had a data breach involving the theft of sensitive or confidential business information. On average, these types of breaches have occurred 2.9 times in surveyed organizations, with the theft or loss of confidential financial information having the most significant impact on reputation and brand.
· Respondents strongly believe in understanding the root cause of the breach and protecting victims from identity theft. When asked what their organizations did following a breach to preserve or restore brand and reputation, the top three steps were: conduct investigations and forensics, work closely with law enforcement, and protect those affected from potential harms such as identity theft.

The Ponemon study clearly shows that when data breaches occur, the collateral damage to a company's brand and reputation becomes a significant hard cost that must be factored into the total financial loss. Download the Ponemon Reputation Impact Study

By: Mike Horrocks Earlier this week, my wife and I were discussing the dinner plans for Thanksgiving. The yams, cranberries, and pumpkin pies were purchased, and the secret family recipes were pulled out of the cupboard. Everything was ready…we thought. Then the topic of the turkey was brought up. In the buzz of work, family, kids, etc., both of us had forgotten to get the turkey. We had each thought the other was covering this purchase and had scratched it off our respective lists. Our Thanksgiving dinner was at risk! This made me think of what best practices from our industry could be utilized if I was going to mitigate risks and pull off the perfect dinner. So I pulled the page from the Basel Committee on Banking Supervision that defines operational risk as "the risk of loss resulting from inadequate or failed internal processes, people, systems or external events," and I have some suggestions that I think work for both your Thanksgiving dinner and for your existing loan portfolios. First, let's cover "inadequate or failed processes." Clearly our shopping-list process failed. But how are your portfolio management processes? Are they clearly documented, and can they be implemented throughout the organization? Your processes should be as well communicated and documented as the "Smashed Yam Bake" recipe, or you may be at risk. Next, let's focus on the "people and systems." People make mistakes – learn from them, correct them, and try to get the "systems" to make it so there are fewer mistakes. For example, I don't want the risk of letting the turkey cook too long, so I use a remote meat thermometer. Ok, it is a little geeky; however, the turkey has come out perfect every year. What systems do you have in place to make your quarterly reviews of the portfolio more consistent and up to your standards? Lastly, how do I mitigate those "external events"? Odds are I will still be able to get a turkey tonight. If not, I talked to a friend of mine who is a chef, and I have the plans for a goose. How flexible are your operations, and how accessible are you to the subject matter experts that can get you out of those situations? A solid risk management program takes into account unforeseen events and can turn them into opportunities. So as the Horrocks family gathered in Norman Rockwell-like fashion this Thanksgiving, a moment of thanks was given to the folks on the Basel committee. Likewise, in your next risk review, I hope you can give thanks for the minimized losses and mitigated risks. Otherwise, we will have one thing very much in common…our goose will be cooked.

With the most recent guidance newly issued by the Federal Financial Institutions Examination Council (FFIEC), there is renewed conversation about knowledge based authentication. I think this is a good thing. It brings back into the forefront some of the things we have discussed for a while, like the difference between secret questions and dynamic knowledge based authentication, or the importance of risk based authentication. What does the new FFIEC guidance say about KBA? Acknowledging that many institutions use challenge questions, the FFIEC guidance highlights that how challenge questions are implemented can greatly impact their efficacy. Chances are you already know this. Of greater importance, though, is the fact that the FFIEC guidelines caution against the use of less sophisticated systems and information that can be easily guessed or obtained from an Internet search, given the amount of information available. As mentioned above, the FFIEC guidelines call for questions that "do not rely on information that is often publicly available," recommending instead a broad range of data assets on which to base questions. This is an area knowledge based authentication users should review carefully. At this point in time it is perfectly appropriate to ask, "Does my KBA provider rely on data that is publicly sourced?" If you aren't sure, ask for and review data sources. At a minimum, you want to look for the following in your KBA provider:

· Questions! Diverse questions from broad data categories, including credit and noncredit assets
· Consumer question performance as one of the elements within an overall risk-based decisioning policy
· Robust performance monitoring. Monitor against established key performance indicators and do it often
· A process to rotate questions and adjust access parameters and velocity limits. Keep fraudsters guessing!
· Use of the resources that are available to you. Experian has compiled information that you might find helpful: www.experian.com/ffiec

Finally, I think the release of the new FFIEC guidelines may have made some people wonder if this is the end of KBA. I think the answer is a resounding "No." Not only do the FFIEC guidelines support the continued use of knowledge based authentication, but recent research suggests that KBA is the authentication tool identified as most effective by consumers. Where I would draw caution is when research doesn't distinguish between "secret questions" and dynamic knowledge based authentication, which we all know are very different.
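Two of the items above, rotating questions and enforcing velocity limits, lend themselves to a simple illustration. The sketch below is a toy example with assumed question pools and parameters; it is not a depiction of how any particular KBA product works.

    # Toy sketch: draw questions across diverse data categories so they rotate
    # between sessions, and throttle repeated attempts with a velocity limit.
    import random
    import time
    from collections import defaultdict, deque

    QUESTION_POOLS = {
        "credit":    ["credit_q1", "credit_q2", "credit_q3"],
        "noncredit": ["noncredit_q1", "noncredit_q2", "noncredit_q3"],
    }
    MAX_ATTEMPTS = 3          # assumed velocity limit per identity
    WINDOW_SECONDS = 3600     # assumed rolling window
    _attempts = defaultdict(deque)

    def within_velocity_limit(identity_id, now=None):
        now = now if now is not None else time.time()
        window = _attempts[identity_id]
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()              # drop attempts outside the window
        if len(window) >= MAX_ATTEMPTS:
            return False
        window.append(now)
        return True

    def draw_questions(per_pool=1):
        """Sample across data categories so question sets rotate between sessions."""
        return [q for pool in QUESTION_POOLS.values() for q in random.sample(pool, per_pool)]

    if within_velocity_limit("consumer-123"):
        print(draw_questions())
    else:
        print("Too many recent attempts; route to an alternate authentication step.")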

By: Mike Horrocks Have you ever been struck by a turtle, or even better, burnt by water skis that were on fire? If you are like me, these are not accidents that I think will ever happen to me, and I'm not concerned that my family doctor didn't do a rotation in medical school to specialize in treating them. On October 1, 2013, however, doctors and hospitals across the U.S. will have the ability to identify, log, bill, and track those accidents and thousands of other very specific medical events. In fact, the list will jump from the current 18,000 medical codes to 140,000 medical codes. Some people hail this as a great step toward the management of all types of medical conditions, whereas others view it as an introduction of noise into a medical system that is already overburdened. What does this have to do with credit risk management, you ask? When I look at the amount of financial and non-financial data that the credit industry has available to understand the risk of our consumer or business clients, I wonder where we are in the range of "take two aspirins and call me in the morning" to "[the accident] occurred inside a chicken coop" (code: Y9272). Are we only identifying a risky consumer after they have defaulted on a loan? Or are we trying to find a pattern in the consumer's purchases at a coffee house that would correlate with some other data point to indicate risk when the moon is full? The answer is somewhere in between, and it will be different for each institution. Let's start with what is known to be predictable when it comes to monitoring our portfolios - data and analytics, coupled with portfolio risk monitoring to minimize risk exposure - and then expand that over time. Click here for a recent case study that demonstrates this quite successfully with one of our clients. Next steps could include adding analytics and/or triggers to identify certain risks more specifically. When it comes to risk, incorporating attributes or a solid set of triggers, for example, that will identify risk early on and can drill down to some of the specific events, combined with technology that streamlines portfolio management processes - whether you have an existing system in place or are considering a migration - will give you better insight into the risk profile of your consumers. Think about where your organization lies on the spectrum. If you are already monitoring your portfolio with some of these solutions, consider what the next logical step to improve the process is - is it more data, advanced analytics using that data, a combination of both, or perhaps a better system for monitoring the risk more closely? Wherever you are, don't let your institution end up with the financial equivalent of needing the new medical codes W2202XA, W2202XD, and W2202XS (injuries resulting from walking into a lamppost once, twice, and sequentially).
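As a simple illustration of what a solid set of triggers can look like in practice, here is a minimal sketch. The rules, attribute names and thresholds are assumptions for illustration only, not recommended policy values.

    # Each trigger is a simple test over account attributes that flags an account
    # for closer review between full portfolio reviews. All values are illustrative.
    TRIGGERS = [
        ("utilization_spike", lambda a: a["revolving_utilization"] >= 0.90),
        ("new_delinquency",   lambda a: a["days_past_due"] >= 30),
        ("score_drop",        lambda a: a["score_change_6m"] <= -40),
        ("inquiry_surge",     lambda a: a["inquiries_6m"] >= 4),
    ]

    def fired_triggers(account):
        return [name for name, rule in TRIGGERS if rule(account)]

    portfolio = [
        {"id": "A1", "revolving_utilization": 0.95, "days_past_due": 0,
         "score_change_6m": -10, "inquiries_6m": 1},
        {"id": "A2", "revolving_utilization": 0.40, "days_past_due": 35,
         "score_change_6m": -55, "inquiries_6m": 5},
    ]
    for acct in portfolio:
        hits = fired_triggers(acct)
        if hits:
            print(acct["id"], "-> review:", ", ".join(hits))

In a real portfolio, tests of this kind would run against bureau attributes and account data on a regular cycle, with flagged accounts routed into the review process described above.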

Our guest blogger this week is Tom Bowers, Managing Director, Security Constructs LLC – a security architecture, data leakage prevention and global enterprise information consulting firm. The rash of large-scale data breaches in the news this year raises many questions, one of which is this: how do hackers select their victims? The answer: research. Hackers do their homework; in fact, an actual hack typically takes place only after many hours of first studying the target. Here's an inside look at a hacker in action:

· Using search queries through such resources as Google and job sites, the hacker creates an initial map of the target's vulnerabilities. For example, job sites can offer a wealth of information such as hardware and software platform usage, including specific versions and their use within the enterprise.
· The hacker fills out the map with a complete intelligence database on your company, perhaps using public sources such as government databases, financial filings and court records. Attackers want to understand such details as how much you spend on security each year, other breaches you've suffered, and whether you're using LDAP or federated authentication systems.
· The hacker tries to identify the person in charge of your security efforts. As they research your Chief Security Officer or Chief Information Security Officer (who they report to, conferences attended, talks given, media interviews, etc.), hackers can get a sense of whether this person is a political player or a security architect, and can infer the target's philosophical stance on security and where they're spending time and attention within the enterprise.
· Next, hackers look for business partners, strategic customers and suppliers used by the target. Sometimes it may be easier to attack a smaller business partner than the target itself. Once again, this information comes from basic search engine queries; attackers use job sites and corporate career sites to build a basic map of the target's network.

Once assembled, all of this information offers a list of potential and likely egress points within the target. While there is little you can do to prevent hackers from researching your company, you can reduce the threat this poses by conducting the same research yourself. Though the process is a bit tedious to learn, it is free to use; you are simply conducting competitive intelligence on your own enterprise. By reviewing your own information, you can draw similar conclusions to the attackers, allowing you to strengthen those areas of your business that may be at risk. For example, if you want to understand which of your web portals may be exposed to hackers, use the following search term in Google: "site:yourcompanyname.com -www.yourcompanyname.com" This query specifies that you want to see everything on your site except WWW sites. Web portals do not typically start with WWW, and this query will show "eportal.yourcompanyname, ecomm.yourcompanyname." Portals are a great place to start, as they usually contain associated user names and passwords; this means that a database is storing these credentials, which is a potential goldmine for attackers. You can set up a Google Alert to constantly watch for new portals; simply type in your query, select how often you want updates, and Google will send you an alert every time a new portal shows up in its results. Knowledge is power. The more you know about your own business, the better you can protect it from becoming prey to hacker-hawks circling in cyberspace.
Download our free Data Breach Response Guide

By: Kari Michel The way medical debts are treated in credit scores may change with the introduction of the Medical Debt Responsibility Act in June 2011. The Medical Debt Responsibility Act would require the three national credit bureaus to expunge medical collection records of $2,500 or less from files within 45 days of their being paid or settled. The bill is co-sponsored by Representatives Heath Shuler (D-N.C.), Don Manzullo (R-Ill.) and Ralph M. Hall (R-Texas). As a general rule, expunging predictive information is not in the best interest of consumers or credit granters -- both of which benefit when credit reports and scores are as accurate and predictive as possible. If any type of debt information proven to be predictive is expunged, consumers risk exposure to improper credit products, as they may appear to be more financially equipped to handle new debt than they truly are. Medical debts are never taken into consideration by VantageScore® Solutions LLC's model if the debt is known to be reported by a medical facility. When a medical debt is outsourced to a third-party collection agency, it is treated the same as other debts that are in collection. Collection accounts of lower than $250, or ones that have been settled, have less impact on a consumer's VantageScore® credit score. With or without medical-debt-in-collection information, the VantageScore® credit score model remains highly predictive.
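For illustration only, the rule the bill describes can be sketched as a simple filter over collection tradelines; this is not a depiction of how any bureau actually processes data, and the field names are assumptions.

    # Encodes the rule the bill describes: a medical collection of $2,500 or less
    # would come off the file within 45 days of being paid or settled.
    from datetime import date, timedelta

    def remains_on_file(tradeline, today):
        if tradeline["type"] != "medical_collection":
            return True
        if tradeline["original_amount"] > 2500:
            return True
        paid_or_settled = tradeline.get("paid_or_settled_date")
        if paid_or_settled is None:
            return True                   # unpaid, so the rule does not apply
        return today < paid_or_settled + timedelta(days=45)

    tradelines = [
        {"type": "medical_collection", "original_amount": 800,
         "paid_or_settled_date": date(2011, 5, 1)},
        {"type": "medical_collection", "original_amount": 4000,
         "paid_or_settled_date": date(2011, 5, 1)},
        {"type": "card_collection", "original_amount": 800,
         "paid_or_settled_date": date(2011, 5, 1)},
    ]
    print([remains_on_file(t, today=date(2011, 8, 1)) for t in tradelines])  # [False, True, True]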

With the raising of the U.S. debt ceiling and its recent ramifications consuming the headlines over the past month, I began to wonder what would happen if the general credit consumer made a similar argument to their credit lender. Something along the lines of, "Can you please increase my credit line (although I am maxed out)? I promise to reduce my spending in the future!" While novel, probably not possible. In fact, just the opposite typically occurs when an individual borrows up to their personal "debt ceiling." When the ratio of the credit an individual utilizes to the credit available to them rises above a certain percentage, it can adversely affect their credit score, in turn affecting their ability to secure additional credit. This percentage, known as the utilization rate, is one of several factors considered as part of an individual's credit score calculation. For example, the utilization rate makes up approximately 23% of an individual's calculated VantageScore® credit score. The good news is that consumers as a whole have been reducing their utilization rates on revolving credit products such as credit cards and home equity lines of credit (HELOCs) to the lowest levels in over two years. Bankcard and HELOC utilization is down to 20.3% and 49.8%, respectively, according to the Q2 2011 Experian – Oliver Wyman Market Intelligence Reports. In addition to lowering their utilization rates, consumers are also doing a better job of managing their current debt, resulting in multi-year lows for delinquency rates, as mentioned in my previous blog post. By lowering their utilization and delinquency rates, consumers are viewed as less of a credit risk and become more attractive to lenders for offering new products and increasing credit limits. Perhaps the government could learn a lesson or two from today's credit consumer.
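As a quick worked example of the utilization rate, the calculation is simply total revolving balances divided by total revolving credit limits; the account figures below are made up.

    # Illustrative utilization calculation across a consumer's revolving accounts.
    accounts = [
        {"name": "bankcard_1", "balance": 2_400,  "credit_limit": 8_000},
        {"name": "bankcard_2", "balance": 600,    "credit_limit": 7_000},
        {"name": "heloc",      "balance": 30_000, "credit_limit": 60_000},
    ]
    total_balance = sum(a["balance"] for a in accounts)
    total_limit = sum(a["credit_limit"] for a in accounts)
    print(f"overall revolving utilization: {total_balance / total_limit:.1%}")  # 44.0%
    # Paying balances down (or obtaining a higher limit) lowers this ratio, which is
    # one reason deleveraging consumers look less risky on this factor.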

As I'm sure you are aware, the Federal Financial Institutions Examination Council (FFIEC) recently released its "Supplement to Authentication in an Internet Banking Environment," guiding financial institutions to mitigate risk using a variety of processes and technologies as part of a multi-layered approach. In light of this updated mandate, businesses need to move beyond simple challenge-and-response questions to more complex out-of-wallet authentication. Additionally, those incorporating device identification should look to more sophisticated technologies well beyond traditional IP address verification alone. Recently, I contributed to an article on how these new guidelines might affect your institution. Check it out here, in full: http://ffiec.bankinfosecurity.com/articles.php?art_id=3932 For more on what the FFIEC guidelines mean to you, check out these resources - which also give you access to a recent Webinar.