All posts by Guest Contributor

By: Heather Grover

In past client and industry talks, I've discussed the increasing importance of retail branches to the growth strategy of the bank. Branches are the bank's most heavily used channel, and they tend to be the primary tool for relationship expansion. Given its face-to-face nature, the branch has historically been viewed as a relatively low-risk channel needing little (if any) identity verification; robust risk-based authentication and out-of-wallet questions are used less often here than elsewhere. However, a now well-established fraud best practice is to perform proper identity verification and fraud prevention at the point of DDA account opening. In the current environment of declining credit application volumes and approval rates across the enterprise, there is an increased focus on organic growth through deposits. Proper vetting during DDA account opening brings your retail process closer in line with the rest of your organization's identity theft prevention program. It also provides assurance and confidence that the customer can now be cross-sold and up-sold to other products.

A key industry challenge is that many of the tools currently used in DDA are less mature than those in other areas of the organization. We see few retail clients using advanced fraud analytics or fraud models to minimize fraud, and even fewer using them to automate manual processes, even though more than 90 percent of DDA accounts are opened manually. A relatively simple way to improve your branch operations is to streamline your existing ID verification and fraud prevention tool set:

1. Are you using separate tools to verify identity and minimize fraud? Many providers offer solutions that do both, which can reduce the number of steps required to process a new account.

2. Is the solution real time? The sooner you can give new account holders an immediate and final decision, the less time and effort you'll spend finalizing that decision after they leave the branch.

3. Does the solution provide detailed data for manual review? This can save valuable analyst time and provider costs by limiting the need for additional searches.

In my next post, we'll discuss how fraud prevention in DDA impacts the customer experience.

Published: December 30, 2009 by Guest Contributor

By: Amanda Roth

The final level of validation for your risk-based pricing program is to validate for profitability. Not only does this analysis build on the two previous analyses, but it factors in the cost of making a loan based on the risk associated with the applicant. Many organizations skip this crucial step; as a result, they may have applicants grouped together correctly but still find themselves unprofitable.

The premise of risk-based pricing is that we price to cover the costs associated with an applicant. If an applicant has a higher probability of delinquency, we can assume there will be additional collection, reporting, and servicing costs associated with keeping that applicant in good standing. We must understand what these costs may be, though, before we can price accordingly. Information of this type can be difficult to determine with the resources available to your organization. If you aren't able to determine the exact time and costs associated with loans at different risk levels, there are industry best practices that can be applied.

Of primary importance is factoring in the cost to originate, service, and terminate a loan at varying risk levels. This is the only true way to validate that your pricing program is working to provide profitability to your loan portfolio.
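As a back-of-the-envelope illustration of pricing to cover cost (all cost figures are hypothetical, and the simple additive structure is an assumption, not the author's method), a minimum rate per risk tier might be sketched as:

```python
def required_rate(funding_cost, servicing_cost, expected_loss, target_margin):
    """Price needed to cover risk-adjusted costs plus a profit target,
    all expressed as an annualized percentage of the loan balance."""
    return funding_cost + servicing_cost + expected_loss + target_margin

# Hypothetical cost assumptions per risk tier (% of balance).
tiers = {
    "A": {"funding_cost": 2.0, "servicing_cost": 0.5, "expected_loss": 0.3},
    "C": {"funding_cost": 2.0, "servicing_cost": 1.2, "expected_loss": 2.5},
}

for name, costs in tiers.items():
    rate = required_rate(target_margin=1.5, **costs)
    print(f"Tier {name}: price at or above {rate:.2f}%")
```

The point of the exercise is simply that the riskier tier carries higher servicing and loss costs, so its break-even rate is higher.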

Published: December 28, 2009 by Guest Contributor

--by Andrew Gulledge

Intelligent use of features

Question ordering: You want some degree of randomization in the questions included in each session. If a fraudster (posing as you) comes through Knowledge Based Authentication two or three times, wouldn't you want them to face new questions each time? At the same time, you want to use the better-performing questions more often. One way to achieve both is to group the questions into categories and use a fixed category ordering (with the better-performing categories higher in the batting lineup); within each category, question selection is randomized. This way, you generally use the better questions more, while making it difficult to come through Knowledge Based Authentication twice and get the same questions presented back to you. (You can also force all new questions in subsequent sessions with a question exclusion strategy, but this can be restrictive and make the "failure to generate questions" rate spike.)

Question weighting: Since we know some questions outperform others, both in percentage correct and in fraud separation, it is generally a good idea to weight the questions with points based on these performance metrics. Weighting can squeeze some additional fraud detection out of your Knowledge Based Authentication tool. It also provides considerable flexibility in your decisioning, since the measure is no longer just "how many questions were answered correctly" but "what percentage of points was earned."

Usage limits: You should only allow a consumer through the Knowledge Based Authentication process a certain number of times before issuing an auto-fail decision. This can take the form of x uses allowed within y hours, days, or whatever window fits your business.

Timeout limit: You should not allow fraudsters to research the questions in the middle of a Knowledge Based Authentication session. The real consumer should know the answers off the top of their head. In a web environment, five minutes should be plenty of time to answer three to five questions. A call center environment should allow more time, since some people can be a bit chatty on the phone.
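The ordering-plus-randomization and point-weighting ideas above can be sketched in a few lines. Question IDs, category names, and weights here are all hypothetical:

```python
import random

# Categories in fixed order of performance (best first); selection
# within each category is randomized.
QUESTION_BANK = [
    ("mortgage", ["Q1", "Q2", "Q3"]),
    ("auto",     ["Q4", "Q5"]),
    ("employer", ["Q6", "Q7", "Q8"]),
]

def select_questions(previously_asked=frozenset(), rng=random):
    """Pick one question per category, skipping questions the consumer
    has already seen in earlier sessions (exclusion strategy)."""
    session = []
    for _category, questions in QUESTION_BANK:
        candidates = [q for q in questions if q not in previously_asked]
        if candidates:  # a category can be exhausted by exclusions
            session.append(rng.choice(candidates))
    return session

# Hypothetical per-question point weights based on performance metrics.
WEIGHTS = {"Q1": 3, "Q2": 2, "Q3": 2, "Q4": 1, "Q5": 1, "Q6": 2, "Q7": 1, "Q8": 1}

def session_points_pct(asked, correct):
    """Score a session as percentage of points earned, not questions right."""
    possible = sum(WEIGHTS[q] for q in asked)
    earned = sum(WEIGHTS[q] for q in correct)
    return 100 * earned / possible
```

Note how the exclusion set shrinks the candidate pool: too aggressive an exclusion strategy is exactly what drives the "failure to generate questions" rate up.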

Published: December 22, 2009 by Guest Contributor

By: Amanda Roth

To refine your risk-based pricing another level, it is important to analyze where your tiers are set and determine whether they are set appropriately. (We find many regulators and examiners are looking for this next level of analysis.)

This analysis begins with the results of the scoring model validation. The distributions from that analysis not only determine whether the score can separate good and delinquent accounts; they also highlight which score ranges have similar delinquency rates, allowing you to group your tiers appropriately. After all, you do not want applicants with a 1 percent chance of delinquency priced the same as those with an 8 percent chance. By reviewing the interval delinquency rates as well as the odds ratios, you should be able to determine where a large enough difference occurs to warrant different pricing.

This analysis increases the opportunity for portfolio profitability, as it reduces the likelihood that higher-risk applicants receive lower pricing. As expected, the overall risk management of the portfolio improves when a proper risk-based pricing program is developed. In my next post we will look at the final level of validation, which provides insight into pricing for profitability.
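The grouping step can be sketched mechanically: merge adjacent score intervals into one pricing tier whenever their delinquency rates are close. The interval boundaries, rates, and the one-percentage-point threshold below are hypothetical:

```python
def group_tiers(intervals, min_gap=1.0):
    """Merge adjacent score intervals into pricing tiers whenever the
    delinquency-rate difference is below `min_gap` percentage points.
    `intervals` is a list of (score_range, delinquency_rate_pct) ordered
    from highest-risk scores to lowest-risk scores."""
    tiers = []
    for score_range, rate in intervals:
        if tiers and abs(rate - tiers[-1][-1][1]) < min_gap:
            tiers[-1].append((score_range, rate))  # similar risk: same tier
        else:
            tiers.append([(score_range, rate)])    # big jump: new tier
    return tiers

intervals = [("300-579", 8.2), ("580-619", 7.8), ("620-659", 4.1),
             ("660-699", 1.2), ("700-850", 0.9)]
```

With these numbers, the five intervals collapse into three tiers: the two ~8 percent ranges, the 4.1 percent range, and the two ~1 percent ranges. In practice the breaks would come from interval delinquency rates and odds ratios, as described above, not a fixed gap.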

Published: December 18, 2009 by Guest Contributor

By: Amanda Roth

As discussed earlier, validation of a risk-based pricing program can mean several different things. Let's break these options down. The first option is to validate the scoring model used to set the pricing for your program. This is the most basic validation of the program, and it does not guarantee any insight into loan profitability expectations. A validation of this nature will help you determine whether the score being used actually helps establish the risk level of an applicant.

This analysis is completed using a snapshot of new booked loans received during a period of time, usually 18-24 months prior to the current period. It is extremely important to view only the new booked loans taken during that period and the score they received at the time of application. By maintaining this specific population, you ensure the analysis is truly indicative of the predictive power of your score at the time you make the decision and apply the recommended risk-based pricing. By analyzing the distribution of good accounts versus delinquent accounts, you can determine whether the score is truly able to separate these groups. Without acceptable separation, it would be difficult to make any decisions based on the score models, especially risk-based pricing.

Although beneficial in determining whether you are using the appropriate scoring models for pricing, this analysis does not provide insight into whether your risk-based pricing program itself is set up correctly. Please join me next time for a look at another option for this analysis.
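The post doesn't name a specific separation statistic, but one common way to quantify how well a score separates good from delinquent accounts is the Kolmogorov-Smirnov (KS) statistic, the maximum gap between the two cumulative score distributions. A minimal sketch with made-up scores:

```python
def ks_separation(good_scores, bad_scores):
    """Kolmogorov-Smirnov statistic: the maximum gap between the
    cumulative score distributions of good and delinquent accounts.
    Higher values mean the score separates the two groups better."""
    cutoffs = sorted(set(good_scores) | set(bad_scores))

    def cdf(scores, x):
        return sum(s <= x for s in scores) / len(scores)

    return max(abs(cdf(good_scores, x) - cdf(bad_scores, x)) for x in cutoffs)

good = [700, 720, 740, 760]   # hypothetical scores of good accounts
bad = [500, 520, 540, 700]    # hypothetical scores of delinquent accounts
```

A KS near zero would mean the distributions overlap almost completely, which is the "without acceptable separation" situation where score-based pricing decisions break down.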

Published: December 18, 2009 by Guest Contributor

By: Kari Michel

Lenders are looking for ways to improve their collections strategies as they continue to deal with unprecedented consumer debt; significant increases in delinquency, charge-off rates and unemployment; and declining collectability on accounts.

Improve collections

To maximize recovered dollars while minimizing collections costs and resources, new collections strategies are a must. The standard assembly-line "bucket" approach to collection treatment no longer works, because lenders cannot afford the inefficiencies and costs of working each account equally without any intelligence about the likelihood of recovery. A segmentation approach helps control spend and reduces labor costs while maximizing the dollars collected. Credit-based data can be used in decision trees to create segments, which can be used with or without collection models.

For example, consider a portion of a full decision tree that shows the separation in liquidation rates achieved by applying an attribute to a recovery score. This entire segment has an average liquidation rate of 21.91 percent. The attribute applied to this score segment is the aggregated available credit on open bank card trades updated within 12 months. Using just this one attribute within this score band, liquidation rates range from 11 to 35 percent. Additional attributes can be applied to grow the tree, isolating pockets of customers who are more recoverable and identifying segments that are not likely to be recovered.

From a fully developed segmentation analysis, appropriate collections strategies can be determined to prioritize the accounts most likely to pay, creating new efficiencies within existing collection strategies to help improve collections.
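The attribute split described above can be sketched with hypothetical account data (field names and dollar figures are invented for illustration); each branch's liquidation rate is collected dollars over placed dollars:

```python
def liquidation_rate(accounts):
    """Dollars collected as a percentage of dollars placed."""
    placed = sum(a["placed"] for a in accounts)
    return 100 * sum(a["collected"] for a in accounts) / placed

def split_by_attribute(accounts, attribute, threshold):
    """One decision-tree split: partition a score band on an attribute."""
    low = [a for a in accounts if a[attribute] < threshold]
    high = [a for a in accounts if a[attribute] >= threshold]
    return low, high

# Hypothetical accounts within one recovery-score band.
accounts = [
    {"placed": 1000, "collected": 110, "available_credit": 0},
    {"placed": 1000, "collected": 120, "available_credit": 500},
    {"placed": 1000, "collected": 340, "available_credit": 5000},
    {"placed": 1000, "collected": 360, "available_credit": 8000},
]
low, high = split_by_attribute(accounts, "available_credit", 1000)
```

With these invented numbers, the low-available-credit branch liquidates at 11.5 percent and the high branch at 35 percent, mirroring the kind of spread the post describes; repeating such splits grows the tree.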

Published: December 17, 2009 by Guest Contributor

--by Andrew Gulledge

General configuration issues

Question selection: In addition to choosing questions that generally have a high percentage correct and good fraud separation, consider excluding any questions that are clearly not a fit for your consumer population. Don't get too trigger-happy, however, or you'll see a spike in your "failure to generate questions" rate.

Number of questions: Many people use three or four out-of-wallet questions in a Knowledge Based Authentication session, but some use more or fewer, based on their business needs. In general, more questions make for a stricter authentication session but might detract from the customer experience. They may also create longer handling times in a call center environment. Furthermore, it is harder to generate many questions for some consumers, including thin-file types. Fewer Knowledge Based Authentication questions can be less invasive for the consumer, but limit the fraud detection value of the KBA process.

Multiple choice: One advantage of this answer format is that it relies on recognition memory rather than recall memory, which is easier for the consumer. Another advantage is that it generally avoids complications associated with minor numerical errors, typos, date formatting errors and text scrubbing requirements. A disadvantage of multiple choice, however, is that it can make educated guessing (and potentially gaming) easier for fraudsters.

Fill in the blank: This format is a good fit for some KBA questions, but less so for others. A simple numeric answer works well (some small variance can be allowed where appropriate), but longer text strings can present complications. While undoubtedly difficult for a fraudster to guess, most consumers would not know the full, official, correctly spelled name of the company to which they send their monthly auto payment. Numeric fill-in-the-blank questions are also good candidates for KBA in an IVR environment, where consumers can use their phone's keypad to enter the answers.
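A minimal sketch of the "small variance can be allowed" idea for numeric fill-in-the-blank answers (the tolerance value is an arbitrary example, not a recommended setting):

```python
def check_numeric_answer(given, expected, tolerance=0.0):
    """Fill-in-the-blank check allowing a small variance, e.g. a monthly
    payment amount that is 'close enough'. Non-numeric input fails."""
    try:
        return abs(float(given) - float(expected)) <= tolerance
    except ValueError:
        return False
```

This sidesteps the text-scrubbing complications of long string answers: a consumer who keys "350" for a true payment of 342 passes with a $25 tolerance, while garbage input simply fails.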

Published: December 14, 2009 by Guest Contributor

I have already commented on "secret questions" as the root of all evil when considering tools to reduce identity theft and minimize fraud losses. No, I'm not quite ready to jump off that soapbox... not just yet, not when we're deep into the season of holiday deals, steals and fraud. The answers to secret questions are easily guessed, easily researched, or easily forgotten. Is this the kind of security you want standing between your account and a fraudster during the busiest shopping time of the year?

There is plenty of research demonstrating that fraud rates spike during the holiday season. There is also plenty of research demonstrating that fraudsters perpetrate account takeover by changing the PIN, address, or e-mail address of an account; these are activities that could be treated as risky behaviors in decisioning strategies. So what is the best approach to identity theft red flags and fraud account management? A risk-based authentication approach, of course!

Knowledge Based Authentication (KBA) provides strong authentication and can be part of a multifactor authentication environment without a negative impact on the consumer experience, provided its purpose is explained to the consumer. Let's say a fraudster is trying to change the PIN or e-mail address on an account. When one of these risky behaviors is initiated, a Knowledge Based Authentication session begins; to help minimize fraud, the action is prevented if the KBA session is failed. Using this same logic, a risk-based authentication approach can be applied to overall account management at many points of the lifecycle:

• Account funding
• Account information changes (PIN, e-mail, address, etc.)
• Transfers or wires
• Requests for line/limit increases
• Payments
• Unusual account activity
• Authentication before engaging with a fraud alert representative

Depending on the risk management strategy, additional methods may be combined with KBA, such as IVR or out-of-band authentication and follow-up contact via e-mail, telephone or postal mail. Of course, all of this ties in with what we would consider a comprehensive Red Flag Rules program. Risk-based authentication, as part of a fraud account management strategy, is one of the best ways we know to ensure that customers aren't left singing, "On the first day of Christmas, the fraudster stole from me..."
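The gating logic can be sketched in a few lines. Action names and the `run_kba_session` callback are hypothetical stand-ins for whatever your platform exposes:

```python
# Hypothetical set of account-management actions treated as risky.
RISKY_ACTIONS = {"change_pin", "change_email", "change_address",
                 "wire_transfer", "limit_increase"}

def handle_request(action, run_kba_session):
    """Gate risky account-management actions behind a KBA session.
    `run_kba_session` is a callable returning True if the consumer
    passes the out-of-wallet questions."""
    if action not in RISKY_ACTIONS:
        return "allowed"            # low-risk action: no challenge
    return "allowed" if run_kba_session() else "blocked"
```

In a real deployment the pass/fail callback would itself be risk-based (score plus questions), but the shape is the same: risky behavior triggers the challenge, and failure prevents the action.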

Published: December 7, 2009 by Guest Contributor

--by Andrew Gulledge

Where does Knowledge Based Authentication fit into my decisioning strategy?

Knowledge Based Authentication can fit into various parts of your authentication process. Some folks choose to put every consumer through KBA, while others send only their riskier transactions through the out-of-wallet questions. Some people use Knowledge Based Authentication to feed a manual review process, while others treat a KBA failure as a hard decline. Uses for KBA are as sundry and varied as the questions themselves.

Decision matrix: As prior bloggers have discussed, a well-engineered fraud score can provide considerable lift to any fraud risk strategy. When possible, it is a good idea to combine both score and questions in the decisioning process. This can be done with a matrixed approach, where you are more lenient on the questions if the applicant has a good fraud score, and more lenient on the score if the applicant did well on the questions. In a decision matrix, a set decision code is placed in each cell based on fraud risk.

Decision overrides: These provide a nice complement to your standard fraud decisioning strategy. Different fraud solution vendors provide different indicators or flags from which decisioning rules can be created. For example, you might decide to fail a consumer who provides a Social Security number that is recorded as deceased. These rules can provide additional lift to the standard decisioning strategy, whether it relies on Knowledge Based Authentication questions alone, questions and score, or some other combination. Overrides can work as both auto-pass and auto-fail.
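A decision matrix like the one described might be sketched as a simple lookup. The score bands, point thresholds, and decision codes here are hypothetical; the lenient-in-one-dimension-when-strong-in-the-other pattern is the point:

```python
# Rows: fraud-score band; columns: share of KBA points earned.
# Cell values are hypothetical decision codes.
DECISION_MATRIX = {
    "low_risk":  {"high": "pass",   "mid": "pass",   "low": "review"},
    "mid_risk":  {"high": "pass",   "mid": "review", "low": "fail"},
    "high_risk": {"high": "review", "mid": "fail",   "low": "fail"},
}

def decide(score_band, kba_points_pct):
    """Combine fraud-score band and KBA point percentage into one decision."""
    col = "high" if kba_points_pct >= 80 else "mid" if kba_points_pct >= 50 else "low"
    return DECISION_MATRIX[score_band][col]
```

Notice the matrixed leniency: a middling 55 percent KBA result passes a low-risk applicant but only routes a mid-risk applicant to review. Overrides (such as a deceased-SSN flag) would be checked before or after this lookup as auto-fail or auto-pass rules.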

Published: December 7, 2009 by Guest Contributor

By: Wendy Greenawalt

In my last blog on optimization we discussed how optimized strategies can improve collections. In this blog, I would like to discuss how optimization can bring value to decisions related to mortgage delinquency and modification.

Over the last few years mortgage lenders have seen a sharp increase in mortgage account delinquencies and a dramatic change in consumer mortgage payment trends. Specifically, lenders have seen a shift away from consumers' traditional willingness to pay their mortgage obligation first while allowing other debts to go delinquent. This shift in borrower behavior appears unlikely to change anytime soon, so lenders must make smarter account management decisions for mortgage accounts. Adding to the issue, property values continue to decline in many areas, and lenders must now identify whether a consumer is a strategic defaulter, a candidate for loan modification, or simply a consumer affected by the economic downturn. Many loans that were modified at the beginning of the mortgage crisis have since become delinquent and have ultimately been foreclosed upon by the lender.

Choosing the right collection action for mortgage accounts is increasingly complex, but optimization can help lenders identify the ideal consumer-level treatment while balancing organizational goals such as minimizing losses, maximizing internal resources, and retaining the most valuable consumers. Optimization assists with these difficult decisions by using a mathematical algorithm that assesses all available options and selects the ideal decision for each consumer based on organizational goals and constraints. This technology can be implemented within current decisioning processes, whether real time or batch, and can provide substantial lift in prediction over business-as-usual techniques.
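As a toy illustration of "assess all possible options and select the ideal decision under constraints" (treatments, costs, and recovery figures are invented, and production optimization engines use far more scalable mathematics than brute-force search), picking the treatment mix that maximizes expected recovery within a cost budget might look like:

```python
from itertools import product

# Hypothetical treatments: per-account cost and expected recovery ($)
# by borrower type.
TREATMENTS = {
    "letter":       {"cost": 1,  "recovery": {"strategic": 5,  "hardship": 20}},
    "call":         {"cost": 8,  "recovery": {"strategic": 15, "hardship": 60}},
    "modification": {"cost": 40, "recovery": {"strategic": 30, "hardship": 200}},
}

def optimize(accounts, budget):
    """Exhaustively search treatment assignments, keeping the one that
    maximizes total expected recovery while staying within the budget."""
    best, best_value = None, -1
    for combo in product(TREATMENTS, repeat=len(accounts)):
        cost = sum(TREATMENTS[t]["cost"] for t in combo)
        if cost > budget:
            continue  # violates the resource constraint
        value = sum(TREATMENTS[t]["recovery"][a] for t, a in zip(combo, accounts))
        if value > best_value:
            best, best_value = combo, value
    return best, best_value
```

With a tight budget, the search correctly spends the expensive modification on the hardship borrower (where it recovers the most) and a cheaper contact on the strategic defaulter, which is the account-level trade-off the post describes.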

Published: December 7, 2009 by Guest Contributor

By: Wendy Greenawalt

Optimization has become a buzzword in the financial services marketplace, but some organizations still fail to realize all of its possible business applications. As credit card lenders scramble to comply with pending credit card legislation, optimization can be a quick, easily implemented solution that fits into current processes to ensure compliance with the new regulations.

Optimizing decisions

Specifically, lenders will now be under strict guidelines about when an APR can be changed on an existing account and the specific circumstances under which the account must return to its original terms. Optimization can easily handle these constraints and identify which accounts should be modified based on historical account information and existing organizational policies. APR changes can require a great deal of internal resources to implement and monitor for ongoing performance. Implementing an optimized strategy tree within an existing account management strategy allows an organization to easily identify consumer-level decisions while monitoring accounts through ongoing batch processing.

New delivery options are now available for lenders to receive optimized strategies for decisions related to account acquisition, customer management and collections. Organizations that are not currently utilizing this technology within their processes should investigate these new delivery options. Recent research suggests optimizing decisions can provide an improvement of 7 to 16 percent over current processes.

Published: November 30, 2009 by Guest Contributor

In my last blog, I discussed the basic concept of a maturation curve, as illustrated below:

Exhibit 1

In Exhibit 1, we examine vintages of loans originated from Q2 2002 through Q2 2008. The purpose of the vintage analysis is to identify those vintages with a steeper slope toward delinquency on the maturation curve. The X-axis represents a timeline in months from month of origination, and the Y-axis represents the 90+ delinquency rate expressed as a percentage of balances in the portfolio. Vintages with a steeper slope have reached a normalized level of delinquency sooner, and their trend lines may suggest they will overshoot the delinquency rate expected for the portfolio based on its credit quality standards.

So how can you use a maturation curve as a portfolio management tool? As a consultant, I spend a lot of time with clients trying to understand issues such as why their charge-offs are higher than plan (budget). I also investigate whether excess credit costs are related to collections effectiveness, collections strategy, collections efficiency, credit quality or a poorly conceived budget. I recall one engagement where different functional teams within the client's organization were pointing fingers at each other because their budget had evaporated. One look at their maturation curves and I had the answers I needed. I noticed that two vintages per year had maturation curves pointed due north, with a much steeper slope than all other months of the year. Why would only two months of originations each year perform so differently from all other vintages?

I went back to my career experiences in banking, where I worked for a large regional bank that ran marketing solicitations several times a year. Each of these programs was targeted at prospects that, in most instances, were out-of-market, or in other words, outside the bank's branch footprint. Bingo! I got it! The client was soliciting new customers outside its market and was likely getting adverse selection. While it targeted the "right" customers, those with credit scores and credit attributes within an acceptable range, the best of that targeted group was not interested in accepting the offer; they did not do business with my client and preferred an in-market player. Meanwhile, lower-grade prospects accepted the offers because the terms were better than they could get in-market. The result was adverse selection... and what I was staring at was the "smoking gun" I'd been looking for in those two-a-year vintages whose delinquency reached the moon.

That's the value of building a maturation curve analysis: identifying specific vintages with characteristics more adverse than others. I also use the information to target those adverse populations and track the performance of specific treatment strategies aimed at containing losses on those segments. You might use this to identify which origination vintages of your home equity portfolio are most likely to migrate to higher levels of delinquency, then use credit bureau attributes to identify specific borrowers for an early-lifecycle treatment strategy. As that beer commercial says: "brilliant!"
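A maturation curve of the kind plotted in Exhibit 1 can be computed from loan-level snapshots. Here is a minimal sketch (field names are hypothetical) that returns the 90+ delinquency rate, as a percentage of balances, by vintage and months on book:

```python
from collections import defaultdict

def maturation_curves(loans):
    """90+ delinquency rate (% of balances) keyed by (vintage, months on book).
    Each loan snapshot carries 'vintage', 'months_on_book', 'balance',
    and 'dpd90' (True if the loan is 90+ days past due)."""
    total = defaultdict(float)
    past_due = defaultdict(float)
    for loan in loans:
        key = (loan["vintage"], loan["months_on_book"])
        total[key] += loan["balance"]
        if loan["dpd90"]:
            past_due[key] += loan["balance"]
    return {key: 100 * past_due[key] / bal for key, bal in total.items()}

# Hypothetical snapshots: two loans from one vintage, one from the next.
loans = [
    {"vintage": "2007-06", "months_on_book": 6, "balance": 100.0, "dpd90": False},
    {"vintage": "2007-06", "months_on_book": 6, "balance": 100.0, "dpd90": True},
    {"vintage": "2007-07", "months_on_book": 6, "balance": 100.0, "dpd90": False},
]
```

Plotting each vintage's series against months on book yields the curves; the "due north" vintages in the story would stand out as the series that climb fastest at the same months on book.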

Published: November 25, 2009 by Guest Contributor

--by Jeff Bernstein

In the current economic environment, many lenders and issuers across the globe are struggling to manage the volume of cases coming into collections. The challenge is that by the time new cases arrive in the early phases of delinquency, the borrower is already in distress, and the opportunity for a good outcome is diminished.

One of the real "hot" items on the list of emerging best practices and innovative changes in collections is the concept of early-lifecycle treatment strategy. Essentially, this is the treatment of current, non-delinquent borrowers who are exhibiting higher-risk characteristics, or who are at above-average risk of future default. The challenge is how to identify these customers for early intervention and triage in the collections strategy process. One often-overlooked tool is the maturation curve, which identifies vintages within a portfolio that are performing worse than average.

A maturation curve measures how long it takes, from origination, for a vintage or segment of the portfolio to reach a normalized rate of delinquency. Let's assume you are launching a new credit product and begin booking new loans this month. All loans originated and booked during that initial time frame form a "vintage" of the portfolio; each month's originations are a separate vintage whose performance we can track over time. How many months will it take before the loans booked in that initial month reach a normal level of delinquency, given the credit quality of the portfolio and its borrowers, typical collections servicing, delinquency reporting standards, and the passage of time? The answer certainly depends on those factors, and could be graphed as follows:

Exhibit 1

In Exhibit 1, we examine vintages of loans originated from Q2 2002 through Q2 2008. The purpose of the analysis is to identify those vintages with a steeper slope toward delinquency, which is also known as a delinquency maturation curve. The X-axis represents a timeline in months from month of origination, and the Y-axis represents the 90+ delinquency rate expressed as a percentage of balances in the portfolio. Vintages with a steeper slope have reached a normalized level of delinquency sooner, and their trend lines may suggest they will overshoot the delinquency rate expected for the portfolio based on its credit quality standards.

So how do we use the maturation curve as a tool? In my next blog, I will discuss how to use maturation curves to identify trends across various portfolios, and how to differentiate collections issues from originations or lifecycle risk management opportunities.

Published: November 23, 2009 by Guest Contributor

In my last post I discussed the problem of confusing what I would call "real" Knowledge Based Authentication (KBA) with secret questions. However, I don't think that's where the market focus should be. Instead of looking at Knowledge Based Authentication as it is today, we should be looking toward the future, and the future starts with risk-based authentication.

If you're like most people, right about now you're wondering exactly what I mean by risk-based authentication. How does it differ from Knowledge Based Authentication, and how did we get from point A to point B? It's actually pretty simple. Knowledge Based Authentication is one factor in a risk-based authentication fraud prevention strategy. A risk-based authentication approach doesn't rely on questions and answers alone; it uses fraud models that include Knowledge Based Authentication performance as part of the fraud analytics to improve fraud detection. With a risk-based authentication approach, decisioning strategies are more robust and should include many factors, including the results of scoring models.

That isn't to say Knowledge Based Authentication isn't an important part of a risk-based approach. It is. Knowledge Based Authentication is a necessity because it has gained consumer acceptance. Without some form of Knowledge Based Authentication, consumers question an organization's commitment to security and data protection. Most importantly, consumers now view Knowledge Based Authentication as a tool for their protection; it has become a bellwether to consumers. As the bellwether, Knowledge Based Authentication has been the perfect vehicle for introducing new and more complex authentication methods to consumers, often without them even knowing it. KBA has allowed us to familiarize consumers with out-of-band authentication and IVR, and I have little doubt it will be one of the tools that plays a part in introducing voice biometrics to help prevent consumer fraud.

Is it always appropriate to present questions to every consumer? No, but that's where a true risk-based approach comes into play. Is Knowledge Based Authentication always a valuable component of a risk-based authentication tool to minimize fraud losses as part of an overall approach to fraud best practices? Absolutely; always. DING!

Published: November 23, 2009 by Guest Contributor

By: Tom Hannagan

Understanding RORAC and RAROC

I was hoping someone would ask about these risk management terms... and someone did. The obvious answer is that the "A" and the "O" are reversed. But there's more to it than that. First, let's see how the acronyms were derived. RORAC is Return on Risk-Adjusted Capital. RAROC is Risk-Adjusted Return on Capital. Both of these five-letter abbreviations are a step up from ROE. This is natural, I suppose, since ROE, meaning Return on Equity of course, is merely a three-letter profitability ratio. A serious breakthrough in risk management and profit performance measurement will have to move up to at least six initials in its abbreviation. Nonetheless, ROE is the jumping-off point toward both RORAC and RAROC.

ROE is generally Net Income divided by Equity, and ROE has many advantages over Return on Assets (ROA), which is Net Income divided by Average Assets. I promise, really, no more new acronyms in this post. The calculations themselves are pretty easy. ROA tends to tell us how effectively an organization is generating general ledger earnings on its base of assets. This used to be the most popular way of comparing banks to each other and for banks to monitor their own performance from period to period. Many bank executives in the U.S. still prefer to use ROA, although this tends to be those at smaller banks.

ROE tends to tell us how effectively an organization is taking advantage of its base of equity, or risk-based capital. This has gained in popularity for several reasons and has become the preferred measure at medium and larger U.S. banks, and at all international banks. One huge reason for the growing popularity of ROE is simply that it is not asset-dependent. ROE can be applied to any line of business or any product. You must have "assets" for ROA, since one cannot divide by zero. Hopefully your Equity account is always greater than zero. If not, well, let's just say it's too late to be reading about this topic.
The flexibility of basing profitability measurement on contribution to Equity allows banks with differing asset structures to be compared to each other. This may even allow banks to be compared to other types of businesses. The asset-independence of ROE also allows a bank to compare internal product lines to each other. Perhaps most importantly, it permits comparing the profitability of lines of business that are almost complete opposites, like lending versus deposit services, including risk-based pricing considerations. This would be difficult, if possible at all, using ROA.

ROE also tells us how effectively a bank (or any business) is using shareholders' equity. Many observers prefer ROE, since equity represents the owners' interest in the business. As we have all learned anew in the past two years, their equity investment is fully at risk. Equity holders are paid last, compared to other sources of funds supporting the bank; shareholders are the last in line if the going gets rough. So equity capital tends to be the most expensive source of funds, carrying the largest risk premium of all funding options. Its successful deployment is critical to the profit performance, even the survival, of the bank. Indeed, capital deployment, or allocation, is the most important executive decision facing the leadership of any organization.

So why bother with RORAC or RAROC? In short, to take risks more fully into account within the institution's performance measurement. ROA and ROE are somewhat risk-adjusted, but only on a point-in-time basis and only to the extent risks are already mitigated in the net interest margin and other general ledger numbers. The Net Income figure is risk-adjusted for mitigated (hedged) interest rate risk, for mitigated operational risk (insurance expenses) and for the expected risk within the cost of credit (the loan loss provision).
The big risk management elements missing from general ledger-based numbers include: market risk embedded in the balance sheet and not mitigated, credit risk costs associated with an economic downturn, unmitigated operational risk, and essentially all of the strategic risk (or business risk) associated with being a banking entity. Most of these risks are summed into a lump called Unexpected Loss (UL). Okay, so I fibbed about no more new acronyms. UL must be covered by the Equity account, or the solvency of the bank becomes an issue.

RORAC is Net Income divided by Allocated Capital. RORAC doesn’t add much risk-adjustment to the numerator, general ledger Net Income, but it can take into account the risk of unexpected loss. It does this by moving beyond book or average Equity and allocating capital, or equity, differentially to various lines of business and even to specific products and clients. This, in turn, makes it possible to move toward risk-based pricing at the relationship management level as well as portfolio risk management. The equity, or capital, allocation should be based on the relative risk of unexpected loss for the different product groups. So it’s a big step in the right direction if you want a profitability metric that goes beyond ROE in addressing risk. And many of us do.

RAROC is Risk-Adjusted Net Income divided by Allocated Capital. RAROC does add risk-adjustment to the numerator, general ledger Net Income, by taking into account the unmitigated market risk embedded in an asset or liability. Like RORAC, RAROC also takes into account the risk of unexpected loss by allocating capital, or equity, differentially to various lines of business and even to specific products and clients. So RAROC risk-adjusts both the Net Income in the numerator AND the allocated Equity in the denominator. It is a fully risk-adjusted metric of profitability and an ultimate goal of modern risk management.
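To see how the two metrics differ, here is a hypothetical sketch for a single line of business. The allocated capital and the market-risk deduction are invented numbers, standing in for whatever a bank’s capital-allocation model would produce:

```python
# Hypothetical line-of-business figures ($ millions):
net_income = 12.0             # general ledger Net Income
allocated_capital = 60.0      # equity allocated per relative unexpected-loss risk
market_risk_adjustment = 2.0  # deduction for unmitigated market risk in the book

# RORAC: unadjusted numerator, risk-based denominator
rorac = net_income / allocated_capital

# RAROC: both numerator and denominator are risk-adjusted
raroc = (net_income - market_risk_adjustment) / allocated_capital

print(f"RORAC = {rorac:.2%}")  # RORAC = 20.00%
print(f"RAROC = {raroc:.2%}")  # RAROC = 16.67%
```

The gap between the two numbers is exactly the cost of the unmitigated market risk that RORAC ignores and RAROC charges against income.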
So, RORAC is a big step in the right direction, and RAROC is the full step in the management of risk; RORAC can be a useful stage on the way to RAROC. RAROC takes ROE to a fully risk-adjusted metric that can be used at the entity level. It can also be broken down for any and all lines of business within the organization, and from there further broken down to the product level and the client relationship level, or summarized by lender portfolio or by market segment. This kind of measurement is invaluable for a highly leveraged business that is built on managing risk successfully as much as on operational or marketing prowess.

Published: November 19, 2009 by Guest Contributor
