
By: Kari Michel Credit risk models are used by almost every lender, and there are many options to choose from, including custom and generic models. With so many choices, how do you know what is best for your portfolio?

Custom models provide the strongest risk prediction and are developed using an organization’s own data. For many organizations, however, a custom model may not be an option: the portfolio may be too small, the data may be insufficient (not enough bads), or time and resources may be lacking. If a custom model is not an option for your organization, generic bureau scoring models are a very powerful alternative for predicting risk.

But how can you tell whether your current scoring model is the best option for you? You may be using a generic model today and hear about a new generic model, for example the VantageScore® credit score. How do you determine whether the new model is more predictive than your current model for your portfolio? The best way is a head-to-head comparison – a validation. A validation requires a sample of accounts from your portfolio, including performance flags. An archive is pulled from the credit reporting agency, both scores are calculated for the same time period, and a performance chart is created to show the comparison.

Two key performance metrics are used to determine the strength of a model. The first is the KS (Kolmogorov-Smirnov) statistic, which measures the maximum difference between the cumulative score distributions of the bad and good accounts. The KS ranges from 0% to 100%; the higher the KS, the stronger the model. The second is the bad capture rate in the bottom 5%, 10% or 15% of the score range. A stronger model will provide better risk prediction and allow an organization to make better risk decisions.
Overall, when stronger scoring models are used, organizations will be better positioned to decrease their bad rates and maintain a more profitable portfolio.
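The two validation metrics described above can be sketched in a few lines of Python. This is a minimal illustration, not a production validation tool; the score sample and good/bad flags are hypothetical, and lower scores are assumed to indicate higher risk.

```python
# Sketch of the two validation metrics: the KS statistic and the
# worst-scoring bad capture rate. Sample data below is hypothetical.

def ks_statistic(scores, bads):
    """Maximum gap between the cumulative bad and good score
    distributions, expressed as a percentage (0-100%)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    total_bad = sum(bads)
    total_good = len(bads) - total_bad
    cum_bad = cum_good = 0
    ks = 0.0
    for i in order:
        if bads[i]:
            cum_bad += 1
        else:
            cum_good += 1
        ks = max(ks, abs(cum_bad / total_bad - cum_good / total_good))
    return 100 * ks

def bad_capture_rate(scores, bads, bottom_pct):
    """Share of all bad accounts found in the worst-scoring
    bottom_pct (e.g. 0.10 for the bottom 10%) of the portfolio."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    cutoff = int(len(scores) * bottom_pct)
    captured = sum(bads[i] for i in order[:cutoff])
    return 100 * captured / sum(bads)

# Hypothetical sample: lower score = riskier, 1 = bad account
scores = [520, 540, 560, 580, 600, 640, 680, 700, 720, 760]
bads = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0]
print(round(ks_statistic(scores, bads), 1))        # 85.7
print(round(bad_capture_rate(scores, bads, 0.30), 1))  # 66.7
```

Running both metrics for the incumbent and the challenger score on the same archive sample is the head-to-head comparison the article describes: the model with the higher KS and higher bad capture rate in the bottom score ranges is the stronger one.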

Published: June 18, 2010 by Guest Contributor

By: Wendy Greenawalt In the last installment of my three-part series dispelling credit attribute myths, we’ll discuss the myth that the lift achieved by utilizing new attributes is minimal, so it is not worth the effort of evaluating and/or implementing new credit attributes.

First, the accuracy and efficiency of credit attributes are hard to measure. Experian data experts are some of the best in the business and, in this edition, we will discuss some of the methods Experian uses to evaluate attribute performance. When considering any new attributes, the first method we use to validate statistical performance is a statistical head-to-head comparison. This method incorporates the KS (Kolmogorov–Smirnov) statistic, Gini coefficient, worst-scoring capture rate or odds ratio when comparing two samples. Once completed, we implement an established standard process to measure value from different outcomes in an automated and consistent format. While this process may be time- and labor-intensive, the reward can be found in the financial savings that can be obtained by identifying the right segments, including:

• Risk models that better identify “bad” accounts and minimize losses
• Marketing models that improve targeting while maximizing campaign dollars spent
• Collections models that enhance identification of recoverable accounts, leading to more recovered dollars with lower fixed costs

Recently, Experian conducted a similar exercise and found that an improvement of 2 to 22 percent in risk prediction can be achieved through the implementation of new attributes. When these metrics are applied to a portfolio where several hundred bad accounts are now captured, the resulting savings can add up quickly (500 accounts with an average loss rate of $3,000 = $1.5M potential savings). These savings over time more than justify the cost of evaluating and implementing new credit attributes.
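One of the comparison metrics named above, the Gini coefficient, can be sketched as follows. This is an illustrative implementation (Gini = 2·AUC − 1 via pairwise score comparison), not Experian's internal method, and the score sample and good/bad flags are made up; the final line reproduces the back-of-envelope savings arithmetic quoted in the article.

```python
# Sketch of the Gini coefficient used in a head-to-head attribute
# comparison, plus the savings arithmetic from the article.

def gini(scores, bads):
    """Gini coefficient = 2*AUC - 1, computed by pairwise comparison
    of good vs. bad scores (higher score = lower risk assumed)."""
    bad_scores = [s for s, b in zip(scores, bads) if b]
    good_scores = [s for s, b in zip(scores, bads) if not b]
    wins = ties = 0
    for g in good_scores:
        for b in bad_scores:
            if g > b:
                wins += 1
            elif g == b:
                ties += 1
    auc = (wins + 0.5 * ties) / (len(good_scores) * len(bad_scores))
    return 2 * auc - 1

# Hypothetical sample: 1 = bad account
scores = [520, 540, 560, 580, 600, 640]
bads = [1, 1, 0, 1, 0, 0]
print(round(gini(scores, bads), 3))  # 0.778

# Back-of-envelope savings quoted in the article:
savings = 500 * 3_000  # 500 newly captured bads x $3,000 average loss
```

Computing the Gini for the old and new attribute sets on the same sample quantifies the lift the article describes; even a few points of lift can capture hundreds of additional bad accounts on a large portfolio.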

Published: October 23, 2009 by Guest Contributor

By: Tracy Bremmer In our last blog (July 30), we covered the first three stages of model development, which are necessary whether developing a custom or generic model. We will now discuss the next three stages, beginning with the “baking” stage: scorecard development.

Scorecard development begins as segmentation analysis is taking place and any reject inference (if needed) is put into place. Considerations for scorecard development include whether the model will be binned (predictive attributes divided into intervals) or continuous (each variable modeled in its entirety), how to account for missing values (or “false zeros”), how to evaluate the validation sample (hold-out sample vs. an out-of-time sample), how to avoid over-fitting the model, and finally which statistics will be used to measure scorecard performance (KS, Gini coefficient, divergence, etc.).

Many times lenders assume that once the scorecard is developed, the work is done. However, the remaining two steps are critical to the development and application of a predictive model: implementation/documentation and scorecard monitoring. Neglecting these two steps is like baking a cake but never taking a bite to make sure it tastes good.

Implementation and documentation is the last stage in developing a model that can be put to use for enhanced decisioning. Where the model will be implemented will determine the timeliness and complexity of putting it into practice. Models can be implemented in an in-house system, at a third-party processor, at a credit reporting agency, etc. Accurate documentation outlining the specifications of the model will be critical for successful implementation and model audits.

Scorecard monitoring will need to be put into place once the model is developed, implemented and put into use. Scorecard monitoring evaluates population stability, scorecard performance, and decision management to ensure that the model is performing as expected over the course of time.
If at any time there are variations from initial expectations, scorecard monitoring allows for immediate modifications to strategies. With all the right ingredients, the right approach, and the checks and balances in place, your model development process has the potential to come out “just right!”
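One common population-stability check in scorecard monitoring can be sketched with the Population Stability Index (PSI), which compares the score distribution at development time with the current one. This is a generic illustration, not the article's specific monitoring process, and the per-band population shares below are hypothetical.

```python
# Sketch of a population stability check: the Population Stability
# Index (PSI) compares development-time vs. current score distributions.
import math

def psi(expected_pct, actual_pct):
    """PSI = sum over score bands of (actual - expected) * ln(actual/expected).
    Common rule of thumb: < 0.10 stable, 0.10-0.25 watch, > 0.25 shift."""
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected_pct, actual_pct))

# Share of accounts per score band (hypothetical):
development = [0.10, 0.20, 0.40, 0.20, 0.10]  # at model development
current = [0.12, 0.22, 0.38, 0.18, 0.10]      # scored population today
print(round(psi(development, current), 4))  # 0.0087 -> stable
```

A low PSI indicates the scored population still resembles the development sample; a rising PSI is exactly the kind of variation from initial expectations that should trigger a strategy review.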

Published: August 4, 2009 by Guest Contributor
