Lately, I’ve been surprised by the emphasis that some fraud prevention practitioners still place on manual fraud reviews and treatment. With the market’s intense focus on real-time decisions and customer experience, it seems that fraud processing isn’t always keeping up with the trends.
I’ve been involved in several lively discussions on this topic. On one side of the argument sit the analytical experts, who are incredibly good at distilling mountains of detailed information into the most accurate fraud risk prediction possible; their work is intended to relieve users of the burden of scrutinizing all of that data. On the other side sit those who argue that only a human being can balance the complexity of judging risk with the sensitivity of handling a potential customer.
All of this has led me to consider the pros and cons of manual fraud reviews.
The Pros of Manual Review
When we consider the requirements for review, it certainly seems that there could be a strong case for using a manual process rather than artificial intelligence. Human beings can bring knowledge and experience that is outside of the data that an analytical decision can see. Knowing what type of product or service the customer is asking for and whether or not it’s attractive to criminals leaps to mind. Or perhaps the customer is part of a small community where they’re known to the institution through other types of relationships—like a credit union with a community- or employer-based field of membership. In cases like these, there are valuable insights that come from the reviewer’s knowledge of the world outside of the data that’s available for analytics.
The Cons of Manual Review
When we look at the cons of manual fraud review, there’s a lot to consider. First, the costs can be high. They go beyond the dollars paid to the people who handle the reviews to include the good customers lost because of the delays and friction of the review process. In a past webinar, we asked approximately 150 practitioners how often an application flagged for identity discrepancies was ultimately abandoned. Half of the audience indicated that more than 50% of those customers were lost. Another 30% didn’t know what the impact was. Those potentially good customers were lost because the manual review process took too long.
Additionally, the results are subjective. Two reviewers with different levels of skill and expertise could look at the same information and choose a different course of action or make a different decision. A single reviewer can be inconsistent, too—especially if they’re expected to meet productivity measures.
Finally, manual fraud review doesn’t support policy development. In another webinar earlier this year, a fraud prevention practitioner mentioned that her organization’s past reliance on manual review left them unable to review fraud cases and figure out how the criminals were able to succeed. Her organization simply couldn’t recreate the reviewer’s thought process and find the mistake that led to a fraud loss.
To Review or Not to Review?
With compelling arguments on both sides, what is the best practice for manually reviewing cases of fraud risk? Hopefully, the following list will help:
DO: Get comfortable with what analytics tell you. Analytics divide events into groups that share a measurable level of fraud risk. Use the analytics to define different tiers of risk and assign each tier to a set of next steps. Start simple, breaking the accounts that need scrutiny into high-, medium- and low-risk groups. Perhaps the high-risk group includes one instance of fraud in every five cases. Have a plan for how these will be handled. You might require additional identity documentation that would be hard for a criminal to falsify, or some other action. Another group might include one instance in every 20 cases. A less burdensome treatment, like a one-time passcode (OTP) sent to a confirmed mobile number, can be used here. Any cases that remain unverified might then be asked for the same verification you used on the high-risk group.
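To make the idea concrete, here is a minimal sketch of tier-based treatment routing. The score scale, thresholds, tier names, and treatments are all hypothetical examples for illustration, not prescriptions; in practice, thresholds would be set so that each tier has a known ratio of good to fraudulent cases.

```python
# A hypothetical sketch: map an analytical fraud score to a risk tier,
# and map each tier to a planned next step rather than open-ended review.

def assign_tier(fraud_score: float) -> str:
    """Map a fraud score on a 0-1 scale to a risk tier.
    Thresholds are illustrative only."""
    if fraud_score >= 0.80:   # e.g. roughly 1 fraud in every 5 cases
        return "high"
    if fraud_score >= 0.50:   # e.g. roughly 1 fraud in every 20 cases
        return "medium"
    return "low"

# Each tier gets a consistent, pre-decided treatment (examples only).
TREATMENTS = {
    "high": "request hard-to-falsify identity documentation",
    "medium": "send one-time passcode (OTP) to confirmed mobile number",
    "low": "approve with standard monitoring",
}

for score in (0.92, 0.61, 0.10):
    tier = assign_tier(score)
    print(f"score={score:.2f} -> {tier}: {TREATMENTS[tier]}")
```

The point of the structure is that every case receives a deliberate, repeatable next step tied to its measured risk, rather than landing in a single undifferentiated review queue.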
DON’T: Rely on a single analytical score threshold or risk indicator to create one giant pile of work that has to be sorted out manually. This approach usually results in a poor experience for a large number of customers, and a strong possibility that the next steps are not aligned to the level of risk.
DO: Reserve manual review for situations where the reviewer can bring some new information or knowledge to the cases they review.
DON’T: Use the same underlying data that generated the analytics as the basis of a review. Consider two simplistic cases, each involving a new address with no past association to the individual. In one case, several other people with different surnames have recently been using the same address. In the other, there are only two, and they share the same surname. In the best possible case, the reviewer recognizes how the other information affects the risk and duplicates what the analytics have already done – flagging the first application as suspicious. In other cases, connections will be missed, resulting in a costly mistake. In real situations, the analytics compare each piece of information to thousands of others, making it likely that second-guessing them using the same data will be problematic.
DO: Focus your most experienced and talented reviewers on creating fraud strategies. The best way to use their time and skill is to create a cycle where risk groups are defined (using analytics), a verification treatment is prescribed and used consistently, and the results are measured. With this approach, the outcome of every case is the result of deliberate action. When fraud occurs, it’s either because the case was miscategorized and received treatment that was too easy to discourage the criminal—or it was categorized correctly and the treatment wasn’t challenging enough.
While there is a middle ground where manual review and skill can be a force multiplier for strong analytics, my sense is that many organizations aren’t getting the best value from their most talented fraud practitioners. To improve this, businesses can start by understanding how analytics can group customers into a few tiers of risk, where the ratio of good to fraudulent cases in each tier is understood. Decide how you want to handle each of those groups, reserving challenging treatments for the riskiest groups and applying easier treatments where the number of good customers per fraud attempt is very high. Set up a consistent waterfall process where customers either successfully verify, cascade to a more challenging treatment, or abandon the process. Focus your manual efforts on monitoring the process you’ve put in place. Start collecting data that shows you how both good and bad cases flow through the process. Know what types of challenges the bad guys are outsmarting so you can route them to challenges that they won’t beat so easily. Most importantly, have a plan and be consistent.
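The waterfall described above can be sketched in a few lines. This is a hypothetical illustration, assuming each challenge reports one of three outcomes: passed, failed (escalate to a harder challenge), or abandoned; the challenge names and customer fields are invented for the example.

```python
# A minimal sketch of a consistent verification waterfall: each customer
# either verifies, cascades to a more challenging treatment, or abandons.

def run_waterfall(challenges, customer):
    """Run ordered challenges (least to most burdensome).
    Each challenge returns True (verified), False (failed, escalate),
    or None (customer abandoned). Returns the final status plus a trail
    of outcomes, so results can be measured and the process monitored."""
    trail = []
    for name, challenge in challenges:
        result = challenge(customer)
        trail.append((name, result))
        if result is True:
            return "verified", trail
        if result is None:
            return "abandoned", trail
        # result is False: cascade to the next, more challenging step
    return "declined", trail  # no challenge passed; still a deliberate outcome

# Hypothetical challenges, ordered from least to most burdensome.
challenges = [
    ("otp", lambda c: c.get("otp_passed")),
    ("document check", lambda c: c.get("docs_passed")),
]

status, trail = run_waterfall(
    challenges, {"otp_passed": False, "docs_passed": True}
)
```

Because every case ends in a recorded status with a trail of challenge outcomes, the data needed to see which challenges criminals are beating accumulates as a side effect of running the process.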
Be sure to keep an eye out for a new post where we’ll talk about how this analytical approach can also help you grow your business.