
The Pros and Cons of Manual Fraud Reviews

by Chris Ryan 6 min read July 28, 2021

Lately, I’ve been surprised by the emphasis that some fraud prevention practitioners still place on manual fraud reviews and treatment. With the market’s intense focus on real-time decisions and customer experience, it seems that fraud processing isn’t always keeping up with the trends.

I’ve been involved in several lively discussions on this topic. On one side sit the analytical experts who are incredibly good at distilling mountains of detailed information into the most accurate fraud risk prediction possible; their work is intended to relieve users of the burden of scrutinizing all of that data. On the other side sit those who argue that only a human being can balance the complexity of judging risk with the sensitivity of handling a potential customer.

All of this has led me to consider the pros and cons of manual fraud reviews.

The Pros of Manual Review

When we consider the requirements for review, it certainly seems that there could be a strong case for using a manual process rather than artificial intelligence. Human beings can bring knowledge and experience that is outside of the data that an analytical decision can see. Knowing what type of product or service the customer is asking for and whether or not it’s attractive to criminals leaps to mind. Or perhaps the customer is part of a small community where they’re known to the institution through other types of relationships—like a credit union with a community- or employer-based field of membership. In cases like these, there are valuable insights that come from the reviewer’s knowledge of the world outside of the data that’s available for analytics.

The Cons of Manual Review

When we look at the cons of manual fraud review, there’s a lot to consider. First, the costs can be high. They go beyond the dollars paid to the people who handle the reviews to the good customers who are lost because of the delays and friction the review process creates. In a past webinar, we asked approximately 150 practitioners how often an application flagged for identity discrepancies ended up being abandoned. Half of the audience indicated that more than 50% of those customers were lost. Another 30% didn’t know what the impact was. Those potentially good customers were lost because the manual review process took too long.

Additionally, the results are subjective. Two reviewers with different levels of skill and expertise could look at the same information and choose a different course of action or make a different decision. A single reviewer can be inconsistent, too—especially if they’re expected to meet productivity measures.

Finally, manual fraud review doesn’t support policy development. In another webinar earlier this year, a fraud prevention practitioner mentioned that her organization’s past reliance on manual review left them unable to review fraud cases and figure out how the criminals were able to succeed. Her organization simply couldn’t recreate the reviewer’s thought process and find the mistake that led to a fraud loss.

To Review or Not to Review?

With compelling arguments on both sides, what is the best practice for manually reviewing cases of fraud risk? Hopefully, the following list will help:

DO: Get comfortable with what analytics tell you. Analytics divide events into groups that share a measurable level of fraud risk. Use the analytics to define different tiers of risk and assign each tier a set of next steps. Start simple, breaking the accounts that need scrutiny into high-, medium-, and low-risk groups. Perhaps the high-risk group includes one instance of fraud in every five cases; have a plan for how these will be handled, such as requiring additional identity documentation that would be hard for a criminal to falsify. Another group might include one instance in every 20 cases, where a less burdensome treatment can be used, like a one-time passcode (OTP) sent to a confirmed mobile number. Any cases that remain unverified might then be asked for the same verification you used on the high-risk group.
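As a rough illustration, tiering like this can be expressed in a few lines of code. The thresholds, tier names, and treatments below are hypothetical, not drawn from any real scoring product; the point is simply that each score range maps to one deliberate, pre-planned next step.

```python
# Hypothetical sketch of score-based tiering. Thresholds, tier names, and
# treatments are illustrative assumptions, not values from a real model.

TIERS = [
    # (minimum score, tier name, prescribed next step)
    (900, "high",   "request hard-to-falsify identity documents"),
    (700, "medium", "send one-time passcode (OTP) to a confirmed mobile number"),
    (0,   "low",    "approve without additional friction"),
]

def assign_treatment(fraud_score: int) -> tuple:
    """Map an analytical fraud score to a risk tier and its treatment."""
    for threshold, tier, treatment in TIERS:
        if fraud_score >= threshold:
            return tier, treatment
    raise ValueError("score below all thresholds")

tier, action = assign_treatment(940)
# A score of 940 lands in the high-risk tier, so the prescribed
# treatment is the document check.
```

The key design choice is that the treatment is attached to the tier, not decided case by case, so every account in a tier receives the same, measurable handling.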

DON’T: Rely on a single analytical score threshold or risk indicator to create one giant pile of work that has to be sorted out manually. This approach usually results in a poor experience for a large number of customers, and a strong possibility that the next steps are not aligned to the level of risk.

DO: Reserve manual review for situations where the reviewer can bring some new information or knowledge to the cases they review.

DON’T: Use the same underlying data that generated the analytics as the basis of a review. Consider two simplistic cases, each involving a new address with no past association to the individual. In one case, several other people with different surnames have recently been using the same address. In the other, there are only two, and they share the same surname. In the best possible case, the reviewer recognizes how the other information affects the risk and duplicates what the analytics have already done, flagging the first application as suspicious. In other cases, connections will be missed, resulting in a costly mistake. In real situations, the analytics compare each piece of information to thousands of others, so second-guessing them with the same data is likely to cause problems more often than it catches them.

DO: Focus your most experienced and talented reviewers on creating fraud strategies. The best way to use their time and skill is to create a cycle where risk groups are defined (using analytics), a verification treatment is prescribed and used consistently, and the results are measured. With this approach, the outcome of every case is the result of deliberate action. When fraud occurs, it’s either because the case was miscategorized and received treatment that was too easy to discourage the criminal—or it was categorized correctly and the treatment wasn’t challenging enough.

Gaining Value

While there is a middle ground where manual review and skill can be a force multiplier for strong analytics, my sense is that many organizations aren’t getting the best value from their most talented fraud practitioners. To improve this, businesses can start by understanding how analytics can group customers into several levels of risk (not just one group but a few) where the ratio of good to fraudulent cases is understood. Decide how you want to handle each of those groups: reserve challenging treatments for the riskiest groups and apply easier treatments where the number of good customers per fraud attempt is very high. Set up a consistent waterfall process in which customers either successfully verify, cascade to a more challenging treatment, or abandon the process. Focus your manual efforts on monitoring the process you’ve put in place, and start collecting data that shows how both good and bad cases flow through it. Know which types of challenges the bad guys are outsmarting so you can route them to challenges they won’t beat so easily. Most importantly, have a plan and be consistent.
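The verify-cascade-abandon waterfall described above can be sketched as a small loop. Everything in this example is a hypothetical assumption: the step names, the callback interface returning "pass", "fail", or "abandon", and the sample customer data are illustrative only.

```python
# Hypothetical sketch of a verification waterfall: each customer attempts
# verification steps in order of increasing friction until one succeeds,
# they abandon, or the steps are exhausted. Step names and the pass/fail/
# abandon callbacks are illustrative assumptions.

def run_waterfall(customer, steps):
    """steps: ordered list of (name, verify_fn), where verify_fn returns
    'pass', 'fail', or 'abandon'. Returns (outcome, log) so that both good
    and bad cases can be measured as they flow through the process."""
    log = []
    for name, verify in steps:
        result = verify(customer)
        log.append((name, result))
        if result == "pass":
            return "verified", log    # customer successfully verified
        if result == "abandon":
            return "abandoned", log   # friction cost us the customer
        # on 'fail', cascade to the next, more challenging treatment
    return "unverified", log          # exhausted all treatments

steps = [
    ("otp",       lambda c: "pass" if c.get("otp_ok") else "fail"),
    ("documents", lambda c: "pass" if c.get("docs_ok") else "abandon"),
]
outcome, trail = run_waterfall({"otp_ok": False, "docs_ok": True}, steps)
# outcome == "verified" after cascading from OTP to document checks
```

Keeping a per-step log is what makes the monitoring advice above actionable: it shows exactly which challenge each good or bad case passed, failed, or abandoned.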

Be sure to keep an eye out for a new post where we’ll talk about how this analytical approach can also help you grow your business.
