
Why risk-based authentication…and what is it, for that matter?

by Keir Breitenfeld 2 min read September 24, 2009

The term “risk-based authentication” means many things to many institutions. Some use the term to refer to their processes; others, to their various service providers. I’d like to establish a working definition of risk-based authentication for this discussion: “Holistic assessment of a consumer and transaction with the end goal of applying the right authentication and decisioning treatment at the right time.”

Now, that “holistic assessment” thing is certainly where the rubber meets the road, right?

One can arguably approach risk-based authentication from two directions. First, a risk assessment can be based upon the type of products or services potentially being accessed and/or utilized by a customer (example: a line of credit). Second, a risk assessment can be based upon the authentication profile of the customer (example: the ability to verify identifying information). I would argue that both approaches have merit, and that a best practice is to merge both into a process that looks at each customer and transaction as unique and therefore worthy of distinctively defined treatment.

In this posting, speaking as a provider of consumer and commercial authentication products and services, I want to first define four key elements of a well-balanced risk-based authentication tool: data, detailed and granular results, analytics, and decisioning.

1. Data: Broad-reaching and accurately reported data assets that span multiple sources, providing comprehensive opportunities to positively verify consumer identities and identity elements.

2. Detailed and granular results: Authentication summary and detailed-level outcomes that portray the amount of verification achieved across identity elements (such as name, address, Social Security number, date of birth, and phone). These deliver a breadth of information and allow positive reconciliation of high-risk fraud and/or compliance conditions. Specific results can be used in manual or automated decisioning policies as well as scoring models.

3. Analytics: Scoring models designed to consistently reflect overall confidence in consumer authentication as well as the fraud risk associated with identity theft, synthetic identities, and first-party fraud. This allows institutions to establish consistent and objective score-driven policies to authenticate consumers and reconcile high-risk conditions. Use of scores also reduces the false-positive ratios associated with single or grouped binary rules. Additionally, scores provide internal and external examiners with a measurable tool for incorporation into both written and operational fraud and compliance programs.

4. Decisioning: Flexibly defined, data- and operationally-driven decisioning strategies that can be applied to the gathering, authentication, and level of acceptance or denial of consumer identity information. This affords institutions an opportunity to employ consistent policies for detecting high-risk conditions, reconcile those conditions that can be resolved, and ultimately determine the response to consumer authentication results – whether that is acceptance, denial of business, or something in between (e.g., further authentication treatments).
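To make the four elements concrete, here is a minimal Python sketch of a score-driven decisioning policy that merges the two directions described earlier (product risk and the customer's authentication profile). This is an illustration only, not an Experian product or method: the field names, score range, thresholds, and outcomes are all hypothetical.

```python
# Illustrative sketch of a score-driven, risk-based authentication policy.
# All field names, thresholds, and outcomes are hypothetical examples.

from dataclasses import dataclass

@dataclass
class AuthResult:
    score: int                 # 0-999 confidence score from an analytics model
    ssn_verified: bool         # element-level verification outcomes
    address_verified: bool
    high_risk_condition: bool  # e.g., an unresolved fraud/compliance flag

def decide(result: AuthResult, product_risk: str) -> str:
    """Merge the transaction-risk and customer-profile views into one decision."""
    # Riskier products (e.g., a line of credit) demand a higher score to pass.
    threshold = 700 if product_risk == "high" else 550

    if result.high_risk_condition:
        return "deny"                    # unreconciled high-risk condition
    if result.score >= threshold and result.ssn_verified:
        return "accept"
    if result.score >= threshold - 150:
        return "step_up"                 # apply further authentication treatment
    return "deny"

print(decide(AuthResult(720, True, True, False), "high"))   # accept
print(decide(AuthResult(600, True, False, False), "high"))  # step_up
```

The "step_up" branch reflects the "somewhere in between" outcome above: neither outright acceptance nor denial, but an additional authentication treatment.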

In my next posting, I’ll talk more specifically about the value propositions of risk-based authentication, and identify some best practices to keep in mind.

Related Posts

Lending hasn’t slowed down—but many decisioning processes have. Applications are coming in faster. Fraud is becoming more sophisticated. Borrowers expect near-instant responses. And yet, inside many organizations, decisions are still being made across fragmented systems, manual reviews, and rigid strategies that weren’t designed, and aren’t optimized, for today’s environment. That broadening gap isn’t just an operational issue; it often stems from a lack of innovation as well. And it’s quietly costing lenders growth, efficiency, and competitive position.

When decisioning falls behind, some symptoms are easy to recognize: applications taking days to process, teams overloaded with manual reviews, and credit and fraud decisions happening in separate platforms. Others are less obvious but arguably more impactful: slipping bottom lines, and fraud (and therefore losses) lurking in lenders’ portfolios.

The root issue is a fragmented infrastructure. Experian has reported that while 79% of financial institutions surveyed globally want fewer vendors or more unified approaches, they typically use eight or more tools across credit, fraud, and compliance. Because most decisioning environments cannot integrate data, adapt strategies, and execute decisions in real time, lenders often have to make tradeoffs: speed vs. accuracy, growth vs. risk, and automation vs. control, to name a few. Meanwhile, the market has moved on. Leading lenders are no longer optimizing individual steps. They’re rethinking decisioning as a connected, intelligent system.

Gaps forming from the status quo in 8 key decision areas

Across the lending lifecycle, there are eight critical moments where decisioning can either accelerate growth or create friction.

Pre-qualification: Pre-qualification should expand your funnel with confidence. But limited data access and static criteria often result in overly conservative targeting or missed opportunities.
Additionally, the delay in acting on a pre-qualification funnel highlights a key area of opportunity for many lenders.

Instant credit decisions: Customers expect real-time outcomes. When decisions rely on manual intervention or fragmented inputs, speed and conversions suffer.

Prescreen and targeting: Disconnected data and rigid segmentation can lead to poorly aligned offers, reducing response rates and wasting acquisition spend.

Credit line management: Without dynamic strategies, credit lines may be too restrictive (limiting growth) or too aggressive (increasing risk).

Early delinquency management: Missed early signals and delayed interventions make it harder to prevent accounts from deteriorating.

Mid- and late-stage delinquency: Strategies that don’t adapt to evolving borrower behavior reduce recovery effectiveness and increase losses.

Collections and recovery: Manual, one-size-fits-all approaches limit recovery rates and increase operational cost.

Ongoing strategy optimization: Perhaps the most overlooked gap: many lenders lack the ability to continuously test, learn, and refine decision strategies as conditions change.

What these gaps are really costing you

Individually, each of these breakdowns may seem manageable. Together, they can create systemic drag on performance. That shows up in four critical ways:

Missed growth opportunities: Good borrowers are declined, abandoned, or never targeted in the first place. Credit offers fail to align with actual borrower potential.

Higher operational costs: Manual reviews and disconnected workflows consume time and resources that could be spent on higher-value work.

Increased fraud exposure and friction: Fraud is proliferating and becoming more expensive to manage. The Federal Trade Commission reported that $12.5 billion was lost to fraud in the U.S. in 2024, a 25% increase over the prior year.
For many financial institutions, the first reaction is often to add more steps to the decisioning process, which can impact good borrowers.

Increased competitive pressure: Fintechs and modern lenders are focused on delivering faster, more personalized experiences, capturing share while traditional processes lag behind. 80% of banks and credit unions plan to increase their technology spending in 2026, yet many continue to fall short on planned system deployments, according to Cornerstone Advisors’ annual “What’s Going On in Banking” research report.

What innovative decisioning leaders are doing differently

Leading lenders are changing how decisions are made, creating a competitive advantage. Instead of stitching together point solutions, they’re adopting a more integrated approach that brings together:

Comprehensive data – including both credit and fraud insights
Optimized decision strategies – designed to balance growth and risk
Real-time execution – enabling faster, more consistent outcomes
Continuous optimization – adapting to changing market conditions
Strategic partnerships – leveraging third-party industry expertise to augment their own

This shift eliminates the need for tradeoffs and instead allows lenders to increase approvals while maintaining control, reduce manual effort while improving consistency, and respond faster without sacrificing confidence. The stakes are high and the competition for consumers is even higher, particularly against a backdrop of ever-evolving fraud risks, continuously increasing consumer expectations for seamless, digital-first experiences, and often limited resources. Nearly half of banks and 59% of credit unions have already deployed generative AI, with more investing now, according to the Cornerstone Advisors report. Closing the innovation gap requires a more fundamental shift toward decisioning systems that are connected, scalable, and built for continuous change.
A new foundation for decisioning

This is where platforms like Experian Decisioning are changing the landscape. By bringing together credit and fraud insights, decision strategies, and a flexible technology architecture, lenders can move beyond fragmented processes and build a more unified, intelligent decisioning approach. One that fits within existing systems but also evolves with your needs.

Where to start

For most organizations, impactful change doesn’t require overhauling everything at once. The first step is understanding where your biggest gaps exist, and which decision areas are creating the most friction or missed opportunity. Once you can see where decisioning is not optimized, you can begin to redesign it in a way that’s faster and better suited to what lending has become. By making better decisions, faster, and with greater confidence, lenders can process applications more efficiently and also break away from the pack by leveraging decisioning as a strategic advantage.

Learn more
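The "continuous optimization" element mentioned above is often implemented in practice as champion/challenger testing: routing a small share of applications through an alternative strategy and comparing outcomes over time. A minimal sketch of such routing, with the strategy names and the 10% challenger split as assumed examples (not a description of any specific platform):

```python
# Illustrative champion/challenger routing for continuous strategy optimization.
# Strategy names and the 10% challenger share are hypothetical examples.

import random

def route(application_id: int, challenger_share: float = 0.10) -> str:
    """Deterministically assign each application to a strategy arm."""
    # Seeding by application ID keeps an application in the same arm on retries,
    # so outcome comparisons between arms are not polluted by reassignment.
    rng = random.Random(application_id)
    return "challenger" if rng.random() < challenger_share else "champion"

arms = [route(i) for i in range(10_000)]
print(arms.count("challenger") / len(arms))  # close to 0.10
```

Once enough outcomes accumulate in each arm, the better-performing strategy can be promoted to champion and a new challenger introduced, which is the "test, learn, and refine" loop described above.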

by Stefani Wendel 2 min read March 26, 2026

Model inventories are rapidly expanding. AI-enabled tools are entering workflows that were once deterministic, and decisioning environments are more interconnected than ever. At the same time, regulatory scrutiny around model risk management continues to intensify. In many institutions, classification determines validation depth, monitoring intensity, and escalation pathways while informing board reporting. If classification is wrong, every downstream control is misaligned. And, in 2026, model classification is no longer just about assigning a tier, but rather about understanding data lineage, use case evolution, interdependencies, and governance accountability in a decentralized, AI-driven environment.

We recently spoke with Mark Longman, Director of Analytics and Regulatory Technology, and here are some of his thoughts around five blind spots risk and compliance leaders should consider addressing now.

1. The “Set It and Forget It” Mentality

The Blind Spot
Model classification frameworks are often designed during a regulatory remediation effort or inventory modernization initiative. Once documented and approved, they can remain largely unchanged for years. However, model risk management is an ongoing process. “There’s really no sort of one and done when it comes to model risk management,” said Longman.

Why It Matters
Classification is not merely descriptive; it’s prescriptive. It drives the depth of validation, the frequency of monitoring, the intensity of governance oversight, and the level of senior management visibility. As Longman notes, data fragmentation is compounding the challenge. “There’s data everywhere – internal, cloud, even shadow IT – and it’s tough to get a clear view into the inputs into the models,” he said. When inputs are unclear, tiering becomes inherently subjective, and if classification frameworks are not reviewed regularly, governance intensity can become misaligned with real exposure.
Therefore, static classification is a growing risk, especially in a world of rapidly expanding AI use cases. In a supervisory environment that continues to scrutinize model definitions, particularly as AI tools proliferate, a dynamic, periodically refreshed classification process can demonstrate institutional vigilance.

2. Assuming Third-Party Models Reduce Governance Accountability

The Blind Spot
There is often an implicit belief that vendor-provided models carry less governance burden because they were developed externally.

Why It Matters
Vendor-provided models continue to grow, particularly in AI-driven solutions, but supervisory expectations remain firm. “Third-party models do not diminish the responsibility of the institution for its governance and oversight of the model – whether it’s monitoring, ongoing validation, just evaluating drift model documentation,” Longman said. “The board and senior managers are responsible to make sure that these models are performing as expected and that includes third-party models.” Regulators consistently emphasize that institutions remain responsible for the outcomes produced by models used in their decisioning environments, regardless of origin. If a vendor model influences credit approvals, pricing, fraud decisions, or capital calculations, it directly affects customers, financial performance, and compliance exposure. Treating third-party models as inherently lower risk can also distort internal tiering frameworks. When vendor models are under-classified, validation depth and monitoring rigor may be insufficient relative to their true impact.

3. Limited Situational Awareness of Model Interdependencies

The Blind Spot
Upstream models and shared data sources often feed multiple downstream models simultaneously, and these dependencies are rarely captured in the inventory.

Why It Matters
Risk often flows across interdependencies. When upstream models degrade in performance or introduce bias, downstream models inherit that exposure.
If multiple material decisions depend on the same data transformation or feature engineering process, concentration risk emerges. Without visibility into these dependencies, tiering assessments may underestimate cumulative risk, and monitoring frameworks may fail to detect systemic vulnerabilities. “There has to be a holistic view of what models are being used for – and really somebody to ensure there’s not that overlap across models,” Longman said. Supervisors are increasingly interested in understanding how model risk propagates through business processes. When institutions cannot articulate how models interact, it raises broader concerns about situational awareness and control effectiveness. Therefore, capturing interdependencies within the classification framework enhances more than documentation. It enables more accurate tiering, more targeted monitoring, and more informed governance oversight.

4. Excluding Models Without Defensible Rationale

The Blind Spot
Gray-area tools frequently sit outside formal inventories: rule-based engines, spreadsheet models, scenario calculators, heuristic decision aids, or emerging AI tools used for analysis and summarization. These tools may not neatly fit legacy definitions of a “model,” and so they are sometimes excluded without robust documentation.

Why It Matters
Regulatory definitions of “model” have broadened over time. What creates risk is the absence of defensible reasoning and documentation. Longman describes the risk clearly: “Some [teams] are deploying AI solutions that are sort of unbeknownst to the model risk management community – and almost creating what you might think of as a shadow model inventory.” Without visibility, institutions cannot confidently characterize use, trace inputs, or assign appropriate tiers, according to Longman. It also undermines the credibility of the official inventory during examinations.
A well-governed program can articulate why certain tools fall outside model risk management scope, referencing documented criteria aligned with regulatory guidance. Without that evidence, exclusions can appear arbitrary, suggesting gaps in oversight.

5. Inconsistent or Subjective Classification Frameworks

The Blind Spot
As inventories scale and governance teams expand, classification decisions are often distributed across reviewers. Over time, discrepancies can emerge.

Why It Matters
Inconsistency undermines both risk management and regulatory confidence. If two models with comparable use cases and impact profiles are assigned different tiers without clear justification, it signals that the framework is not being applied uniformly. AI adds even more complexity. When it comes to emerging AI model governance versus traditional model governance, there’s a lot to unpack, says Longman: “The AI models themselves are a lot more complicated than your traditional logistic or multiple regression models. The data, the prompting – you need to monitor the prompts that the LLMs, for example, are responding to, and you need to watch for what you may think of as prompt drift,” Longman said. As frameworks evolve, particularly to incorporate AI, automation, and new regulatory interpretations, institutions must ensure that changes are cascaded across the entire inventory. Partial updates or selective reclassification introduce fragmentation. Longman recommends formalizing classification through a structured decision tree embedded in policy to ensure consistent outcomes across business units. Beyond clear documentation, a strong classification program is applied consistently, measured objectively, and periodically reassessed across the full portfolio.

BONUS – 6. Elevating Classification with Data-Level Visibility

Some institutions are extending classification discipline beyond models to the data layer itself.
Longman describes organizations that maintain not only a model inventory but a data inventory, mapping variables to the models they influence. This approach allows institutions to quickly assess downstream effects when operational or environmental changes occur, including system updates or even natural disasters affecting payment behavior. In an AI-driven environment, traceability may become a competitive differentiator.

Conclusion

Model classification is foundational. It determines how risk is measured, monitored, escalated, and reported. In a rapidly evolving regulatory and technological environment, it cannot remain static. Institutions that invest now in transparency, consistency, and data-level visibility will not only reduce supervisory friction – they will build a governance framework capable of supporting the next generation of AI-enabled decisioning.

Learn more
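The structured, policy-embedded decision tree that Longman recommends can be sketched as a single function, so that any two reviewers reach the same tier for the same facts. Everything below is a hypothetical illustration: the tier names, criteria, and their ordering are invented examples, not regulatory guidance or any institution's actual framework.

```python
# Illustrative sketch of a structured model-classification decision tree.
# Tier names and criteria are hypothetical, for illustration only.

def classify(uses_ai: bool, drives_material_decisions: bool,
             third_party: bool, in_inventory: bool) -> str:
    """Walk a fixed decision tree so classification is consistent across reviewers."""
    if not in_inventory:
        return "review"          # shadow tools must be inventoried before tiering
    if drives_material_decisions:
        # Third-party origin does NOT reduce the institution's accountability,
        # so vendor models with material impact are tiered just as strictly.
        return "tier_1" if uses_ai or third_party else "tier_2"
    return "tier_3" if uses_ai else "tier_4"

print(classify(uses_ai=True, drives_material_decisions=True,
               third_party=True, in_inventory=True))  # tier_1
```

Embedding the tree in code (or in an equivalently unambiguous policy document) makes the framework measurable and reassessable across the full portfolio, which is the consistency property the article calls for.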

by Stefani Wendel 2 min read March 20, 2026

In today’s digital lending landscape, fraudsters are more sophisticated, coordinated, and relentless than ever. For companies like Terrace Finance — a specialty finance platform connecting over 5,000 merchants, consumers, and lenders — effectively staying ahead of these threats is a major competitive advantage. That is why Terrace Finance partnered with NeuroID, a part of Experian, to bring behavioral analytics into their fraud prevention strategy. It has given Terrace’s team a proactive, real-time defense that is transforming how they detect and respond to attacks — potentially stopping fraud before it ever reaches their lending partners.

The challenge: Sophisticated fraud in a high-stakes ecosystem

Terrace Finance operates in a complex environment, offering financing across a wide range of industries and credit profiles. With applications flowing in from countless channels, the risk of fraud is ever-present. A single fraudulent transaction can damage lender relationships or even cut off financing access for entire merchant groups. According to CEO Andy Hopkins, protecting its partners is a top priority for Terrace: “We know that each individual fraud attack can be very costly for merchants, and some merchants will get shut off from their lending partners because fraud was let through ... It is necessary in this business to keep fraud at a tolerable level, with the ultimate goal to eliminate it entirely.”

Prior to NeuroID, Terrace was confident in its ability to validate submitted data. But with concerns about GenAI-powered fraud growing, including the threat of next-generation fraud bots, Terrace sought out a solution that could provide visibility into how data was being entered and detect risk before applications are submitted.

The solution: Behavioral analytics from NeuroID via Experian

After integrating NeuroID through Experian’s orchestration platform, Terrace gained access to real-time behavioral signals that detected fraud before data was even submitted.
Just hours after Terrace turned NeuroID on, behavioral signals revealed a major attack in progress — NeuroID enabled Terrace to respond faster than ever and reduce risk immediately.

“Going live was my most nerve-wracking day. We knew we would see data that we have never seen before and sure enough, we were right in the middle of an attack,” Hopkins said. “We thought the fraud was a little more generic and a little more spread out. What we found was much more coordinated activities, but this also meant we could bring more surgical solutions to the problem instead of broad strokes.”

Terrace has seen significant results with NeuroID in place. Together, NeuroID and Experian enabled Terrace to build a layered, intelligent fraud defense that adapts in real time.

A partnership built on innovation

Terrace Finance’s success is a testament to what is possible when forward-thinking companies partner with innovative technology providers. With Experian’s fraud analytics and NeuroID’s behavioral intelligence, they have built a fraud prevention strategy that is proactive, precise, and scalable. And they are not stopping there. Terrace is now working with Experian to explore additional tools and insights across the ecosystem, continuing to refine their fraud defenses and deliver the best possible experience for genuine users.

“We use the analogy of a stream,” Hopkins explained. “Rocks block the flow, and as you remove them, it flows better. But that means smaller rocks are now exposed. We can repeat these improvements until the water flows smoothly.”

Learn more about Terrace Finance and NeuroID

Want more of the story? Read the full case study to explore how behavioral analytics provided immediate and long-term value to Terrace Finance’s innovative fraud prevention strategy. Read case study
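As a generic illustration of the kind of pre-submission behavioral signal described above, a fraud team might derive simple flags from how a form is filled rather than from the submitted data itself. To be clear, this sketch is not NeuroID's actual methodology; the thresholds, field names, and signal names are all invented for illustration.

```python
# Illustrative behavioral-signal heuristic: flag bot-like form completion.
# Generic sketch of the concept only, NOT NeuroID's methodology;
# thresholds and signal names are invented.

def risk_signals(field_fill_ms: list[int], paste_events: int) -> list[str]:
    """Derive simple risk flags from how a form was filled, before submission."""
    signals = []
    avg = sum(field_fill_ms) / len(field_fill_ms)
    if avg < 200:                        # humans rarely fill fields this fast
        signals.append("bot_like_speed")
    if paste_events >= len(field_fill_ms) // 2:
        signals.append("heavy_pasting")  # may indicate unfamiliar identity data
    return signals

print(risk_signals([120, 90, 150, 110], paste_events=3))
```

Real behavioral analytics products combine far richer signals (keystroke cadence, hesitation, corrections) with models rather than fixed thresholds; the point here is only that risk can be assessed from entry behavior before any data is submitted.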

by Allison Lemaster 2 min read September 3, 2025