Julie Lee is a Marketing Manager at Experian, specializing in thought leadership at the intersection of data, credit risk, fraud prevention, and customer experience. With over a decade of experience in content strategy for regulated industries, she helps organizations turn data into actionable insights that strengthen decisioning, mitigate risk, and support more seamless customer journeys.

Areas of expertise: Data breach, fraud & identity management, data & analytics

Industry: Financial services


All posts by Julie Lee


Data breaches continue to be a reality for organizations across industries, and the complexity of responding to them is only increasing. From AI-driven fraud to third-party exposures, the risk landscape is shifting fast. Having a modern and tested response plan is essential to containing the damage, protecting your customers, and preserving your organization’s reputation when a breach occurs.

Experian’s eleventh annual Data Breach Response Guide draws on decades of breach support experience. It offers practical strategies and insights for navigating the moments that matter most: the first hours after a breach and the days that follow.

The 2025–2026 guide explores:

- How AI is shaping new breach and fraud patterns
- Where organizations are most vulnerable, including third-party and supply chain weak points
- Consumer expectations and how they influence crisis response
- How prepared organizations are reducing impact and protecting trust
- What is required to build a modern, effective breach response plan

Organizations with a tested plan can potentially reduce the cost, impact, and long-term consequences of a breach. From real-world case insights to crisis communication templates, this guide is designed to help teams act quickly and confidently.

Download the 2025–2026 Data Breach Response Guide to learn how you can strengthen your breach preparedness, reduce risk exposure, and build resilience against the next wave of cybersecurity threats.

Download guide

Published: August 4, 2025 by Julie Lee

Now in its tenth year, Experian’s U.S. Identity and Fraud Report continues to uncover the shifting tides of fraud threats and how consumers and businesses are adapting. Our latest edition sheds light on a decade of change and unveils what remains consistent: trust is still the cornerstone of digital interactions.

This year’s report draws on insights from over 2,000 U.S. consumers and 200 businesses to explore how identity, fraud and trust are evolving in a world increasingly shaped by generative artificial intelligence (GenAI) and other emerging technologies.

Highlights:

- Over a third of companies are using AI, including generative AI, to combat fraud.
- 72% of business leaders anticipate AI-generated fraud and deepfakes as major challenges by 2026.
- Nearly 60% of companies report rising fraud losses, with identity theft and payment fraud as top concerns.
- Digital anxiety persists, with 57% of consumers worried about doing things online.

Ready to go deeper? Explore the full findings and discover how your organization can lead with confidence in an evolving fraud landscape.

Download report
Watch on-demand webinar
Read press release

Published: August 1, 2025 by Julie Lee

In early 2025, European authorities shut down a cybercriminal operation called JokerOTP, responsible for over 28,000 phishing attacks across 13 countries. According to Forbes, the group used one-time password (OTP) bots to bypass two-factor authentication (2FA), netting an estimated $10 million in fraudulent transactions. It’s just one example of how fraudsters are exploiting digital security gaps with AI and automation.

What is an OTP bot?

An OTP bot is an automated tool designed to trick users into revealing their one-time password, a temporary code used in multifactor authentication (MFA). These bots are often paired with stolen credentials, phishing sites or social engineering to bypass security steps and gain unauthorized access.

Here’s how a typical OTP bot attack works:

- A fraudster logs in using stolen credentials.
- The user receives an OTP from their provider.
- Simultaneously, the OTP bot contacts the user via SMS, call or email, pretending to be the institution and asking for the OTP.
- If the user shares the OTP, the attacker gains control of the account.

The real risk: account takeover

OTP bots are often just one part of a larger account takeover strategy. Once a bot bypasses MFA, attackers can:

- Lock users out of their accounts
- Change contact details
- Drain funds or open fraudulent lines of credit

Stopping account takeover means detecting and disrupting the attack before access is gained. That’s where strong account takeover/login defense becomes critical: monitoring suspicious login behaviors and recognizing high-risk signals early.

How accessible are OTP bots?

Mentions of OTP bots on dark web forums jumped 31% in 2024. Bot services offering OTP bypass tools were being sold for just $10 to $50 per attack. One user on a Telegram-based OTP bot platform reported earning $50,000 in a month. The barrier to entry for fraudsters is low, and these figures highlight just how easy and profitable it is to launch OTP bot attacks at scale.

The evolution of fraud bots

OTP bots are one part of the rising wave of fraud bots. According to our report, The Fraud Attack Strategy Guide, bots accounted for 30% of fraud attempts at the beginning of 2024. By the end of the year, that number had risen to 80%, a nearly threefold increase in just 12 months.

Today’s fraud bots are more dynamic and adaptive than before. They go beyond simple scripts, mimicking human behavior, shifting tactics in real time and launching large-scale bot attacks across platforms. Some bypass OTPs entirely or refine their tactics with each failed attempt. With generative AI in the mix, bot-based fraud is getting faster, cheaper and harder to detect. Effective fraud defense now depends on detecting intent, analyzing behavior in real time and stopping threats earlier in the process.

Read this blog: Learn more about identifying and stopping bot attacks.

A cross-industry problem

OTP bots can target any organization that uses 2FA, but the impact varies by sector.

- Financial services, fintech and buy now, pay later (BNPL) providers are top targets for OTP bot attacks due to high-value accounts, digital onboarding and reliance on 2FA. In one case outlined in The Fraud Attack Strategy Guide, a BNPL provider saw 25,000+ bot attempts in 90 days, with over 3,000 bots completing applications, bypassing OTP or using synthetic identities.
- Retail and e-commerce platforms face attacks designed to take over customer accounts and make unauthorized purchases using stored payment methods, gift cards or promo credits. OTP bots can help fraudsters trigger and intercept verification codes tied to checkout or login flows.
- Healthcare and education organizations can be targeted for their sensitive data and widespread use of digital portals. OTP bots can help attackers access patient records and student or staff accounts, or bypass verification during intake and application flows, leading to phishing, insurance fraud or data theft.
- Government and public sector entities are increasingly vulnerable as fraudsters exploit digital services meant for public benefits. OTP bots may be used to sign up individuals for disbursements or aid programs without their knowledge, enabling fraudsters to redirect payments or commit identity theft. This abuse not only harms victims but also undermines trust in the public system.

Across sectors, the message is clear: bots are getting too far in before being detected. Organizations across all industries need the ability to recognize bot risk at the very first touchpoint; the earlier, the better.

The limitations of OTP defense

OTP is a strong second factor, but it’s not foolproof. If a bot reaches the OTP stage, it has very likely already:

- Stolen or purchased valid credentials
- Found a way to trigger the OTP
- Put a social engineering play in motion

Fighting bots earlier in the funnel

The most effective fraud prevention doesn’t just react to bots at the OTP step; it stops them before they trigger OTPs in the first place. But to do that, you need to understand how modern bots operate and how our bot detection solutions, powered by NeuroID, fight back.

The rise of GenAI-powered bots

Bot creation has become dramatically easier. Thanks to generative AI and widely available bot frameworks, fraudsters no longer need deep technical expertise to launch sophisticated attacks. Today’s Gen4 bots can simulate human-like interactions such as clicks, keystrokes and mouse movements with just enough finesse to fool traditional bot detection tools. These bots are designed to bypass security controls, trigger OTPs, complete onboarding flows and even submit fraudulent applications. They are built to blend in.

Detecting bots across two key dimensions

Our fraud detection solutions are purpose-built to uncover these threats by analyzing risk signals across two critical dimensions (a simplified illustration appears at the end of this post).

1. Behavioral patterns

Even the most advanced bots struggle to perfectly mimic human behavior. Our tools analyze thousands of micro-signals to detect deviations, including:

- Mouse movement smoothness and randomness
- Typing cadence, variability and natural pauses
- Field and page transition timing
- Cursor trajectory and movement velocity
- Inconsistent or overly “perfect” interaction patterns

By identifying unnatural rhythms or scripted inputs, we can distinguish real users from automation before the OTP step.

2. Device and network intelligence

In parallel, our technology examines device and network indicators that often reveal fraud at scale:

- Detection of known bot frameworks and automation tools
- Device fingerprinting to flag repeat offenders
- Link analysis connecting devices across multiple sessions or identities
- IP risk, geolocation anomalies and device emulation signals

This layered approach helps identify fraud rings and coordinated bot attacks, even when attackers attempt to mask their activity.

A smarter way to stop bots

We offer both a highly responsive, real-time API for instant bot detection and a robust dashboard for investigative analytics. This combination allows fraud teams to stop bots earlier in the funnel, before they trigger OTPs, fill out forms, or submit fake credentials, and to analyze emerging trends across traffic patterns. Our behavioral analytics, combined with device intelligence and adaptive risk modeling, empowers organizations to act on intent rather than just outcomes. Good users move forward without friction. Bad actors are stopped at the source.

Ready to stop bots in their tracks? Explore Experian’s fraud prevention services.

Learn more

*This article includes content created by an AI language model and is intended to provide general information.
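To make the behavioral dimension concrete, here is the minimal illustration promised above: a toy pre-OTP telemetry check in Python. The field names, thresholds, and scoring are assumptions invented for this sketch, not Experian’s or NeuroID’s production logic.

import statistics

def keystroke_regularity(press_times_ms):
    # Coefficient of variation of inter-key gaps; human typing is irregular.
    gaps = [b - a for a, b in zip(press_times_ms, press_times_ms[1:])]
    if len(gaps) < 2:
        return None
    mean = statistics.mean(gaps)
    return statistics.stdev(gaps) / mean if mean else None

def looks_like_bot(session):
    score = 0
    cv = keystroke_regularity(session.get("key_press_times_ms", []))
    if cv is not None and cv < 0.15:          # near-constant cadence (assumed cutoff)
        score += 1
    if not session.get("mouse_moves"):        # no cursor activity at all
        score += 1
    if session.get("fields_pasted", 0) >= 3:  # wholesale pasting of PII
        score += 1
    return score >= 2

session = {"key_press_times_ms": [0, 100, 200, 300, 400, 500],
           "mouse_moves": [], "fields_pasted": 4}
print(looks_like_bot(session))  # True: route to step-up before any OTP is sent

A session flagged this way can be denied or stepped up before an OTP is ever triggered, which is exactly where the economics favor the defender.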

Published: July 29, 2025 by Julie Lee

Fraud never sleeps, and neither do the experts working to stop it. That’s why we’re back with episode two of Meet the Maker, our video series spotlighting the brilliant minds behind Experian’s cutting-edge fraud solutions. In this episode, Nash Ali, Head of Operational Strategy, and Dave Tiezzi, Senior Vice President of Payments and New Markets, share how NeuroID’s behavioral analytics and device and network intelligence, combined with Experian Link’s credit card owner verification, help e-commerce merchants combat key fraud threats while providing a seamless checkout experience. With decades of experience in payments and fraud, these fraud-fighting experts know exactly what it takes to stop fraud, minimize friction, and reduce chargebacks so e-commerce merchants can protect the most crucial stage of the buying process. Watch now for an exclusive look at the minds shaping the future of fraud prevention. Interested in learning more about our fraud management solutions?

Watch previous episode
Learn more

Published: July 8, 2025 by Lauren Makowski

Bot fraud has long been a major concern for digital businesses, but evolving attacks at all stages of the customer lifecycle have overshadowed an ever-present issue: click fraud. Click fraud is a cross-departmental challenge for businesses, and stopping it requires a level of insight and understanding that many businesses don’t yet have. It’s left many fraud professionals asking: What is click fraud? Why is it so dangerous? How can it be prevented?

What is click fraud?

A form of bot fraud, click fraud occurs when bots drive fraudulent clicks to websites, digital ads, and emails. Click fraud typically exploits application flows or digital advertising; traffic from click bots appears to be genuine but is actually fraudulent, incurring excessive costs through API calls or ad clicks. These fraudulent clicks won’t result in any sales, but they will reveal sensitive information, inflate costs, and clutter data.

What is the purpose of click fraud?

It depends on the target. We’ve seen click bots begin (but not complete) insurance quotes or loan applications, gathering information on competitors’ rates. In other cases, fraudsters use click fraud to drive artificial clicks to ads on their own sites, generating revenue from PPC/CPC advertising. The reasons behind click fraud vary widely, but regardless of intent, its impacts cut deep.

The dangers of click fraud

On the surface, click fraud may seem less harmful than other types of fraud. Unlike application fraud and account takeover fraud, consumers’ data isn’t being stolen, and fraud losses are relatively small. But click fraud can still be detrimental to a business’s bottom line: every API call incurred by a click bot is an additional expense, and swarms of click bots distort data that’s invaluable to fraud attack detection and customer acquisition.

The impact of click fraud extends beyond that, though. Not only can click bots gather sensitive data like insurance quotes, but click fraud can also be a gateway to more insidious fraud schemes. Fraud rings are constantly looking for vulnerabilities in businesses’ systems, often using bots to probe for back-door entrances to applications and ways to bypass fraud checks. For example, if an ad directs to an unlisted landing page that provides an alternate entry to a business’s ecosystem, fraudsters can identify this through click fraud and use bots to find vulnerabilities in the alternate application process. In doing so, they lay the groundwork for larger attacks with more tangible losses.

Keys to click fraud prevention

Without the right tools in place, modern bots can appear indistinguishable from humans; as a result, many businesses struggle to identify increasingly sophisticated bots on their websites. Allowing click fraud to remain undetected can make it extremely difficult to know when a more serious fraud attack is at your doorstep.

Preventing click fraud requires real-time visibility into your site’s traffic, including accurate bot detection and analysis of bot behavior. It’s one of many uses for behavioral analytics in fraud detection: behavioral analytics identifies advanced bots pre-submit, empowering businesses to better differentiate click fraud from genuine traffic and other fraud types. With behavioral analytics, bot attacks can be detected and stopped before unnecessary costs are incurred and sensitive information is revealed.

Learn more about our behavioral analytics for fraud detection.
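As a rough illustration of the traffic visibility described above, the sketch below flags click sources whose volume or timing regularity looks automated. The log format and both thresholds are assumptions made for this example, not a production rule set.

from collections import defaultdict
import statistics

clicks = [  # (device_fingerprint, timestamp_seconds) from your click logs
    ("fp_a", t) for t in range(0, 300, 3)       # one click every 3s: 100 clicks
] + [("fp_b", t) for t in (5, 41, 97, 210)]     # sparse, irregular human clicks

by_device = defaultdict(list)
for fp, ts in clicks:
    by_device[fp].append(ts)

for fp, times in by_device.items():
    gaps = [b - a for a, b in zip(times, times[1:])]
    too_many = len(times) > 50                               # assumed volume ceiling
    too_regular = len(gaps) > 1 and statistics.pstdev(gaps) < 0.5
    print(fp, "suspect" if (too_many or too_regular) else "ok")

Real click bots are far less tidy than this, which is why pre-submit behavioral signals matter; still, volume and cadence per fingerprint remain useful first-pass filters.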

Published: June 12, 2025 by Devon Smith

Fake IDs have been around for decades, but today’s fraudsters aren’t just printing counterfeit driver’s licenses; they’re using artificial intelligence (AI) to create synthetic identities. These AI fake IDs bypass traditional security checks, making it harder for businesses to distinguish real customers from fraudsters. To stay ahead, organizations need to rethink their fraud prevention solutions and invest in advanced tools to stop bad actors before they gain access.

The growing threat of AI fake IDs

AI-generated IDs aren’t just a problem for bars and nightclubs; they’re a serious risk across industries. Fraudsters use AI to generate high-quality fake government-issued IDs, complete with real-looking holograms and barcodes. These fake IDs can be used to commit financial fraud, apply for loans or even launder money. Emerging services like OnlyFake are making AI-generated fake IDs accessible. For $15, users can generate realistic government-issued IDs that can bypass identity verification checks, including Know Your Customer (KYC) processes on major cryptocurrency exchanges.1

Who’s at risk?

AI-driven identity fraud is a growing problem for:

- Financial services: Fraudsters use AI-generated IDs to open bank accounts, apply for loans and commit credit card fraud. Without strong identity verification and fraud detection, banks may unknowingly approve fraudulent applications.
- E-commerce and retail: Fake accounts enable fraudsters to make unauthorized purchases, exploit return policies and commit chargeback fraud. Businesses relying on outdated identity verification methods are especially vulnerable.
- Healthcare and insurance: Fraudsters use fake identities to access medical services, prescription drugs or insurance benefits, creating both financial and compliance risks.

The rise of synthetic ID fraud

Fraudsters don’t stop at creating fake IDs; they take it a step further by combining real and fake information to create entirely new identities. This is known as synthetic ID fraud, a rapidly growing threat in the digital economy. Unlike traditional identity theft, where a criminal steals an existing person’s information, synthetic identity fraud involves fabricating an identity that has no real-world counterpart. This makes detection more difficult, as there’s no individual to report fraudulent activity. Without strong synthetic fraud detection measures in place, businesses may unknowingly approve loans, credit cards or accounts for these fake identities.

The deepfake threat

AI-powered fraud isn’t limited to generating fake physical IDs. Fraudsters are also using deepfake technology to impersonate real people. With advanced AI, they can create hyper-realistic photos, videos and voice recordings to bypass facial recognition and biometric verification. For businesses relying on ID document scans and video verification, this can be a serious problem. Fraudsters can:

- Use AI-generated faces to create entirely fake identities that appear legitimate
- Manipulate real customer videos to pass live identity checks
- Clone voices to trick call centers and voice authentication systems

As deepfake technology improves, businesses need fraud prevention solutions that go beyond traditional ID verification. AI-powered synthetic fraud detection can analyze biometric inconsistencies, detect signs of image manipulation and flag suspicious behavior.

How businesses can combat AI fake ID fraud

Stopping AI-powered fraud requires more than traditional ID checks. Businesses need to upgrade their fraud defenses with identity solutions that use multidimensional data, advanced analytics and machine learning to verify identities in real time. Here’s how (a simplified sketch of this layered decisioning appears at the end of this post):

- Leverage AI-powered fraud detection: The same AI capabilities that fraudsters use can also be used against them. Identity verification systems powered by machine learning can detect anomalies in ID documents, biometrics and user behavior.
- Implement robust KYC solutions: KYC protocols help businesses verify customer identities more accurately. Enhanced KYC solutions use multi-layered authentication methods to detect fraudulent applications before they’re approved.
- Adopt real-time fraud prevention solutions: Businesses should invest in fraud prevention solutions that analyze transaction patterns and device intelligence to flag suspicious activity.
- Strengthen synthetic identity fraud detection: Detecting synthetic identities requires a combination of behavioral analytics, document verification and cross-industry data matching. Advanced synthetic fraud detection tools can help businesses identify and block synthetic identities.

Stay ahead of AI fraudsters

AI-generated fake IDs and synthetic identities are evolving, but businesses don’t have to be caught off guard. By investing in identity solutions that leverage AI-driven fraud detection, businesses can protect themselves from costly fraud schemes while ensuring a seamless experience for legitimate customers. At Experian, we combine cutting-edge fraud prevention, KYC and authentication solutions to help businesses detect and prevent AI-generated fake ID and synthetic ID fraud before they cause damage. Our advanced analytics, machine learning models and real-time data insights provide the intelligence businesses need to outsmart fraudsters.

Learn more

*This article includes content created by an AI language model and is intended to provide general information.

1 https://www.404media.co/inside-the-underground-site-where-ai-neural-networks-churns-out-fake-ids-onlyfake/
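The layered sketch promised above: a hypothetical decision function that combines independent verification signals so no single forged artifact (an ID image, a selfie, a PII bundle) can pass alone. The score names, weights, and cutoffs are all invented for illustration and are not Experian’s decisioning.

def decide(checks):
    # Each check is a 0-1 confidence score from a separate verification layer.
    weights = {"doc_authenticity": 0.4, "liveness": 0.3,
               "behavioral": 0.2, "data_consistency": 0.1}
    risk = sum(weights[k] * (1.0 - checks.get(k, 0.0)) for k in weights)
    if risk > 0.5:
        return "deny"
    if risk > 0.25:
        return "step_up"  # e.g., request another document or manual review
    return "approve"

# A convincing AI-generated ID can score well on its own layer yet fail liveness:
print(decide({"doc_authenticity": 0.9, "liveness": 0.2,
              "behavioral": 0.5, "data_consistency": 0.8}))  # step_up

The point of the layering is the interaction: a fraudster who beats document checks with OnlyFake-quality images still has to beat liveness and behavioral layers simultaneously.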

Published: March 20, 2025 by Julie Lee

Fraud rings cause an estimated $5 trillion in financial damages every year, making them one of the most dangerous threats facing today’s businesses. They’re organized, sophisticated and only growing more powerful with the advent of generative AI (GenAI). Armed with advanced tools and an array of tried-and-true attack strategies, fraud rings have perfected the art of flying under the radar and circumventing traditional fraud detection tools. Their ability to adapt and innovate means they can identify and exploit vulnerabilities in businesses’ fraud stacks; if you don’t know how fraud rings work and the right signs to look for, you may not be able to catch a fraud ring attack until it’s too late.

What is a fraud ring?

A fraud ring is an organized group of cybercriminals who collaborate to execute large-scale, coordinated attacks on one or more targets. These highly sophisticated groups leverage advanced techniques and technologies to breach fraud defenses and exploit vulnerabilities. In the past, they were primarily humans working scripts at scale; with GenAI, they’re increasingly mobilizing highly sophisticated bots as part of (or the entirety of) the attack.

Fraud ring attacks are rarely isolated incidents. Typically, these groups will target the same victim multiple times, leveraging insights gained from previous attack attempts to refine and enhance their strategies. This iterative approach enables them to adapt to new controls and increase their impact with each subsequent attack. The impacts of fraud ring attacks far exceed those of an individual fraudster, incurring significant financial losses, interrupting operations and compromising sensitive data. Understanding the keys to spotting fraud rings is crucial for crafting effective defenses to stop them.

Uncovering fraud rings

There’s no single tell-tale sign of a fraud ring. These groups are too agile and adaptive to be defined by one trait. However, all fraud rings, whether an identity fraud ring, a coordinated scam effort, or a large-scale ATO fraud scheme, share common traits that produce warning signs of imminent attacks.

First and foremost, fraud rings are focused on efficiency. They work quickly, aiming to cause as much damage as possible. If the fraud ring’s goal is to open fraudulent accounts, you won’t see a fraud ring member taking their time to input stolen data on an application; instead, they’ll likely copy and paste data from a spreadsheet or rely on fraud bots to execute the task.

Typically, the larger the fraud ring attack, the more complex it is. The biggest fraud rings leverage a variety of tools and strategies to keep fraud teams on their heels and bypass traditional fraud defenses. Fraud rings often test strategies before launching a full-scale attack. This can look like a small “probe” preceding a larger attack, or a mass drop-off after fraudsters have gathered the information they needed from their testing phase.

Fraud ring detection with behavioral analytics

Behavioral analytics in fraud detection uncovers third-party fraud, from large-scale fraud ring operations and sophisticated bot attacks to individualized scams. By analyzing user behavior, organizations can effectively detect and mitigate these threats. With behavioral analytics, businesses have a layer of fraud ring detection that doesn’t exist elsewhere in their fraud stack.

At a crowd level, behavioral analytics reveals spikes in risky behavior, including fraud ring testing probes, that may indicate a forthcoming fraud ring attack but would typically be hidden by sheer volume or disregarded as normal traffic. Behavioral analytics also identifies the high-efficiency techniques fraud rings use, including copy/paste or “chunking” behaviors, and the use of advanced fraud bots designed to mimic human behavior.

Learn more about our behavioral analytics solutions and their fraud ring detection capabilities.

Learn more
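To illustrate the crowd-level signal described above, here is a minimal sketch with made-up numbers: it flags an hour whose risky-behavior rate sits far outside the baseline, the way a fraud ring’s testing probe might. The data and the z-score cutoff are assumptions, not NeuroID’s models.

import statistics

# Share of sessions per hour showing risky behavior (copy/paste, bot-like input).
hourly_risky_rate = [0.02, 0.03, 0.02, 0.025, 0.03, 0.02, 0.11, 0.02]

baseline = hourly_risky_rate[:6]                  # assumed "normal" history
mu, sigma = statistics.mean(baseline), statistics.stdev(baseline)
for hour, rate in enumerate(hourly_risky_rate):
    z = (rate - mu) / sigma
    if z > 3:
        print(f"hour {hour}: risky rate {rate:.0%} (z={z:.1f}) - possible probe")

Hour 6 stands out here; in production the same idea runs over rolling windows and many behavioral features at once.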

Published: February 27, 2025 by Presten Swenson

Fraud never sleeps, and neither do the experts working to stop it. That’s why we’re thrilled to introduce Meet the Maker, our new video series spotlighting the brilliant minds behind Experian’s cutting-edge fraud solutions. In our first episode, Matt Ehrlich, Senior Director of Identity and Fraud Product Management, and Andrea Nighswander, Senior Director of Global Solution Strategy, share how they use data, advanced analytics, and deep industry expertise to stay ahead of fraudsters. With 35+ years of combined experience, these fraud-fighting veterans know exactly what it takes to keep bad actors at bay. Watch now for an exclusive look at the minds shaping the future of fraud prevention. Stay tuned for more episodes featuring the visionaries driving fraud innovation.

Published: February 21, 2025 by Julie Lee

The days of managing credit risk, fraud prevention, and compliance in silos are over. As fraud threats evolve, regulatory scrutiny increases, and economic uncertainty persists, businesses need a more unified risk strategy to stay ahead. Our latest e-book, Navigating the intersection of credit, fraud, and compliance, explores why 94% of forward-looking companies expect credit, fraud, and compliance to converge within the next three years, and what that means for your business.1

Key insights include:

- The line between fraud and credit risk is blurring. Many organizations classify first-party fraud losses as credit losses, distorting the true risk picture.
- Fear of fraud is costing businesses growth. 68% of organizations say they’re denying too many good customers due to fraud concerns.
- A unified approach is the future. Integrating risk decisioning across credit, fraud, and compliance leads to stronger fraud detection, smarter credit risk assessments, and improved compliance.

Read the full e-book to explore how an integrated risk approach can protect your business and fuel growth.

Download e-book

1 Research conducted by InsightAvenue on behalf of Experian

Published: February 20, 2025 by Julie Lee

Picture this: you’re sipping your morning coffee when an urgent email from your CEO pops up in your inbox, requesting sensitive information. Everything about it seems legit: their name, email address, even their usual tone. But here’s the twist: it’s not actually them. This is the reality of spoofing attacks. And these scenarios aren’t rare. According to the Federal Bureau of Investigation (FBI), spoofing/phishing is the most common type of cybercrime.1

In these attacks, bad actors disguise their identity to trick individuals or systems into believing the communication is from a trusted source. Whether it’s email spoofing, caller ID spoofing, or Internet Protocol (IP) spoofing, the financial and reputational consequences can be severe. By understanding how these attacks work and implementing strong defenses, organizations can reduce their risk and protect sensitive information. Let’s break down the key strategies for staying one step ahead of cybercriminals.

What is a spoofing attack?

A spoofing attack occurs when a threat actor impersonates a trusted source to gain access to sensitive information, disrupt operations or manipulate systems. Common types of spoofing attacks include:

- Email spoofing: Fraudulent emails are carefully crafted to mimic legitimate senders, often including convincing details like company logos, real employee names, and professional formatting. These emails trick recipients into sharing sensitive information, such as login credentials or financial details, or prompt them to download malware disguised as attachments. For example, attackers might impersonate a trusted vendor to redirect payments or a senior executive requesting immediate access to confidential data.
- Caller ID spoofing: Attackers manipulate phone numbers to impersonate trusted contacts, making calls appear as if they are coming from legitimate organizations or individuals. This tactic is often used to extract sensitive information, such as account credentials, or to trick victims into making payments. For instance, a scammer might pose as a bank representative calling to warn of suspicious activity on an account, coercing the recipient into sharing private information or transferring funds.
- IP spoofing: IP addresses are falsified to disguise the origin of malicious traffic and bypass security measures. Cybercriminals use this method to redirect traffic, conduct man-in-the-middle attacks (where a malicious actor intercepts and possibly alters the communication between two parties without their knowledge), or overwhelm systems with distributed denial-of-service (DDoS) attacks. For example, attackers might alter the source IP address of a data packet so it appears to come from a trusted source, making it easier to infiltrate networks and compromise sensitive data.

These tactics are often used in conjunction with other cyber threats, such as phishing or bot fraud, making detection and prevention more challenging. (A simplified email triage sketch appears at the end of this post.)

How behavioral analytics can combat spoofing attacks

Traditional fraud prevention methods provide a strong foundation, but behavioral analytics adds a powerful layer to fraud stacks. By examining user behavior patterns, behavioral analytics enhances existing tools to:

- Detect anomalies that signal a spoofing attack.
- Identify bot fraud attempts, where automated scripts mimic legitimate users.
- Enhance fraud prevention solutions with friction-free, real-time insights.

Behavioral analytics is particularly effective when paired with device and network intelligence and machine learning (ML) solutions. These advanced tools can continuously adapt to new fraud tactics, ensuring robust protection against evolving threats.

The role of artificial intelligence (AI) and ML in spoofing attack prevention

AI fraud detection is revolutionizing how organizations protect themselves from spoofing attacks. By leveraging AI analytics and machine learning solutions, organizations can:

- Analyze vast amounts of data to identify spoofing patterns.
- Automate threat detection and response.
- Strengthen overall fraud prevention strategies.

These technologies are essential for staying ahead of cybercriminals, particularly as they increasingly use AI to perpetrate attacks.

Best practices for preventing spoofing attacks

Organizations can take proactive steps to minimize the risk of spoofing attacks. Key strategies include:

- Implementing robust authentication protocols: Use multifactor authentication (MFA) to verify the identity of users and systems.
- Monitoring network traffic: Deploy tools that can analyze traffic for signs of IP spoofing or other anomalies.
- Leveraging behavioral analytics: Adopt advanced fraud prevention solutions that include behavioral analytics to detect and mitigate threats.
- Educating employees: Provide training on recognizing phishing attempts and other spoofing tactics.
- Partnering with fraud prevention experts: Collaborate with trusted providers like Experian to access cutting-edge solutions tailored to your needs.

Why proactive prevention matters

The financial and reputational damage caused by spoofing attacks can be devastating. Organizations that fail to implement effective prevention measures risk:

- Losing customer trust.
- Facing regulatory penalties.
- Incurring significant financial losses.

Businesses can stay ahead of cyber threats by prioritizing spoofing attack prevention and leveraging advanced technologies such as behavioral analytics, AI fraud detection, and machine learning. Investing in fraud prevention solutions today is essential for protecting your organization’s future.

How we help organizations detect spoofing attacks

Spoofing attacks are an ever-present danger in the digital age. With tactics like IP spoofing and bot fraud becoming more sophisticated, businesses must adopt advanced strategies to safeguard their operations. Our comprehensive suite of fraud prevention solutions can help businesses tackle spoofing attacks and other cyber threats. Our advanced technologies, like behavioral analytics, AI fraud detection and machine learning solutions, enable organizations to:

- Identify and respond to spoofing attempts in real time.
- Detect anomalies and patterns indicative of fraudulent behavior.
- Strengthen defenses against bot fraud and IP spoofing.
- Ensure compliance with industry regulations and standards.

Click ‘learn more’ below to explore how we can help protect your organization.

Learn more

1 https://www.ic3.gov/AnnualReport/Reports/2023_IC3Report.pdf

This article includes content created by an AI language model and is intended to provide general information.
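For the email variety specifically, here is the simplified triage sketch promised above, using only Python’s standard library. It assumes your mail gateway has already written a verified Authentication-Results header; the message and policy are illustrative, not a complete anti-spoofing control.

from email import message_from_string

raw = """\
From: ceo@example.com
To: you@example.com
Authentication-Results: mx.example.com; spf=fail smtp.mailfrom=example.com; dkim=none; dmarc=fail
Subject: Urgent wire transfer

Please send the account details immediately.
"""

msg = message_from_string(raw)
results = (msg.get("Authentication-Results") or "").lower()
suspicious = any(f"{check}=fail" in results or f"{check}=none" in results
                 for check in ("spf", "dkim", "dmarc"))
print("quarantine" if suspicious else "deliver")  # quarantine

Header checks like SPF, DKIM, and DMARC catch classic domain spoofing; lookalike domains and compromised accounts pass them, which is why behavioral and process controls still matter.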

Published: January 27, 2025 by Julie Lee

Bots have been a consistent thorn in fraud teams’ sides for years. But since the advent of generative AI (genAI), what used to be just one more fraud type has become a fraud tsunami. This surge in fraud bot attacks has brought with it:

- A 108% year-over-year increase in credential stuffing to take over accounts1
- A 134% year-over-year increase in carding attacks, where stolen cards are tested1
- New account opening fraud at more than 25% of businesses in the first quarter of 2024

While fraud professionals rush to fight back the onslaught, they’re also reckoning with the ever-evolving threat of genAI. A large factor in fraud bots’ new scalability and strength, genAI was the #1 stress point identified by fraud teams in 2024, and 70% expect it to be a challenge moving forward, according to Experian’s U.S. Identity and Fraud Report.

This fear is well-founded. Fraudsters are wasting no time incorporating genAI into their attack arsenal. GenAI has created a new generation of fraud bot tools that make bot development more accessible and sophisticated. These bots reverse-engineer fraud stacks, testing the limits of their targets’ defenses to find triggers for step-ups and checks, then adapt to avoid setting them off. How do bot detection solutions fare against this next generation of bots?

The evolution of fraud bots

The earliest fraud bots, which first appeared in the 1990s,2 were simple scripts with limited capabilities. Fraudsters soon began using these scripts to execute basic tasks on their behalf, mainly form spam and light data scraping. Fraud teams responded, implementing bot detection solutions that continued to evolve as the threats became more sophisticated.

The evolution of fraud bots was steady, and mostly balanced against fraud-fighting tools, until genAI supercharged it. Today, fraudsters are leveraging genAI’s core ability (analyzing datasets and identifying patterns, then using those patterns to generate solutions) to create bots capable of large-scale attacks with unprecedented sophistication. These genAI-powered fraud bots can analyze onboarding flows to identify step-up triggers, automate attacks at high-volume times, and even conduct “behavior hijacking,” where bots record and replicate the behaviors of real users.

How next-generation fraud bots beat fraud stacks

For years, a tried-and-true approach to fraud bot detection was to look for the non-human giveaways: lightning-fast transition speeds, eerily consistent keystrokes, nonexistent mouse movements, and/or repeated device and network data were all tell-tale signs of a bot. Fraud teams could base their bot detection strategies on these behavioral red flags.

Stopping today’s next-generation fraud bots isn’t quite as straightforward. Because they were specifically built to mimic human behavior and cycle through device IDs and IP addresses, today’s bots often appear to be normal, human applicants and circumvent many of the barriers that blocked their predecessors. The data the bots are providing is better, too.3 Fraudsters are using genAI to streamline and scale the creation of synthetic identities.4 By equipping their human-like bots with a bank of high-quality synthetic identities, fraudsters have their most potent, advanced attack avenue to date.

Skirting traditional bot detection with their human-like capabilities, next-generation fraud bots can bombard their targets with massive, often undetected, attacks. In one attack analyzed by NeuroID, a part of Experian, fraud bots made up 31% of a business’s onboarding volume in a single day. That’s nearly one-third of the business’s volume comprised of bots attempting to commit fraud. If the business hadn’t had the right tools in place to separate these bots from genuine users, it wouldn’t have been able to stop the attack until it was too late.

Beating fraud bots with behavioral analytics: The next-generation approach

Next-generation fraud bots pose a unique threat to digital businesses: their data appears legitimate, and they look like a human when they’re interacting with a form. So how do fraud teams differentiate fraud bots from an actual human user?

NeuroID’s product development teams discovered key nuances that separate next-generation bots from humans, and we’ve updated our industry-leading bot detection capabilities to account for them. A big one is mousing patterns: random, erratic cursor movements are part of what makes next-generation bots so eerily human-like, but their movements are still noticeably smoother than a real human’s. Other bot detection solutions (including our V1 signal) wouldn’t flag these advanced cursor movements as bot behavior, but our new signal is designed to identify even the most granular giveaways of a next-generation fraud bot. (A toy version of a smoothness check appears at the end of this post.)

Fraud bots will continue to evolve. But so will we. For example, behavioral analytics can identify repeated actions, down to the pixel a cursor lands on, during a bot attack and block out users exhibiting those behaviors. Our behavioral analytics was built specifically to combat next-gen challenges with scalable, real-time solutions. This proactive protection against advanced bot behaviors is crucial to preventing larger attacks.

For more on fraud bots’ evolution, download our Emerging Trends in Fraud: Understanding and Combating Next-Gen Bots report.

Learn more

Sources
1 HUMAN Enterprise Bot Fraud Benchmark Report
2 Abusix
3 NeuroID
4 Biometric Update
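The toy smoothness check mentioned above: turning-angle variability along a cursor path is one crude way to see the difference, since scripted “erratic” movement still changes direction more uniformly than a hand on a mouse. The paths and the interpretation here are fabricated for illustration, not NeuroID’s signal.

import math

def turning_angle_stddev(path):
    # Std dev of direction changes (radians) along an (x, y) cursor path.
    angles = [math.atan2(y2 - y1, x2 - x1)
              for (x1, y1), (x2, y2) in zip(path, path[1:])]
    diffs = [abs(b - a) for a, b in zip(angles, angles[1:])]
    mean = sum(diffs) / len(diffs)
    return (sum((d - mean) ** 2 for d in diffs) / len(diffs)) ** 0.5

bot_path = [(i, i * 1.02 + (i % 3) * 0.3) for i in range(40)]     # near-linear wiggle
human_path = [(i, math.sin(i / 2.0) * 9 + i) for i in range(40)]  # meandering hand

for label, path in (("bot", bot_path), ("human", human_path)):
    print(label, round(turning_angle_stddev(path), 3))  # bot score comes out lower

Production signals weigh many such micro-features together; no single metric survives contact with Gen4 bots on its own.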

Published: December 17, 2024 by James Craddick

There’s a common saying in the fraud prevention industry: where there’s opportunity, fraudsters are quick to follow. Recent advances in technology are providing ample new opportunities for cybercriminals to exploit. One of the most prevalent techniques observed today is password spraying. From email to financial and health records, consumers and businesses are being impacted by this pervasive form of fraud. Password spraying attacks often fly under the radar of traditional security measures, presenting a unique and growing threat to businesses and individuals.

What is password spraying?

Also known as credential guessing, password spraying involves an attacker applying a list of commonly used passwords against a list of accounts in order to guess the correct password. When password spraying first emerged, an individual might hand-key passwords to try to gain access to a user’s account or a business’s management system.

Credential stuffing is a similar type of fraud attack in which an attacker gains access to a victim’s credentials in one system (e.g., their email) and then attempts to apply those known credentials, via a script or bot, to a large number of sites in order to gain access to other accounts where the victim might be reusing the same credentials. Both are brute-force attack vectors that eventually result in account takeover (ATO), compromising sensitive data that is subsequently used to scam, blackmail, or defraud the victim.

As password spraying and other types of fraud evolved, fraud rings would leverage “click farms” or “fraud farms,” where hundreds of workers used mobile devices or laptops to try different passwords in order to perpetrate fraud attacks on a larger scale. As technology has advanced, bot attacks fueled by generative AI (genAI) have taken the place of humans in the fraud ring. Now, instead of hand-keying passwords into systems, workers at fraud farms can deploy hundreds or thousands of bots that work exponentially faster.

The rise and evolution of bots

Bots are not necessarily new to the digital experience; think of the chatbot on a company’s support page that helps you find an answer more quickly. These automated software applications carry out repetitive instructions mimicking human behavior. While they can be helpful, they can also be leveraged by fraudsters to automate brute-force attacks, often going undetected and resulting in substantial losses.

Generation 4 (Gen4) bots are the latest evolution of these malicious programs, and they’re notoriously hard to detect. Because of their slow, methodical, and deliberate human-like behavior, they easily bypass network-level controls such as firewalls and popular network-layer security.

Stopping Gen4 bots

For any company with a digital presence or that leverages digital networks as part of doing business, the threat from genAI-enabled fraud is serious. The traditional fraud-fighting stack, including firewalls, CAPTCHA and block lists, is not enough in the face of Gen4 bots. Companies at the forefront of fighting fraud are leveraging behavioral analytics to identify and mitigate genAI-powered fraud. And many have turned to industry leader NeuroID, which is now part of Experian.

Watch our on-demand webinar: The fraud bot future-shock: How to spot & stop next-gen attacks

Behavioral analytics is a key component of passive and continuous authentication and has become table stakes in the fraud prevention space. By measuring how a user interacts with a form field (e.g., on a website or mobile app), our behavioral analytics solutions can determine if the user is a potential fraudster, a bot, or a genuine user familiar with the PII entered. Because it’s available at any digital engagement, behavioral data is often the most consistent signal available throughout the customer lifecycle and across geographies. It allows risky users to be rejected or put through more rigorous authentication, while trustworthy users get a better experience, protecting businesses and consumers from genAI-enabled fraud.

As cyber threats evolve, so must our defenses. Password spraying exemplifies the sophisticated methods and technologies attackers now employ to scale their fraud efforts and gain access to sensitive information. To fight next-generation fraud, organizations must employ next-generation technologies and techniques to better defend themselves against this and other types of cyberattacks.

Experian’s approach embodies a paradigm shift where fraud detection increases efficiency and accuracy without sacrificing customer experience. We can help protect your company from bot attacks, fraudulent accounts and other malicious attempts to access your sensitive data. Learn more about behavioral analytics and our other fraud prevention solutions.

Learn more
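For intuition on the spray pattern itself, here is a toy detection sketch: unlike credential stuffing’s per-account credential pairs, a spray shows one source failing logins against many distinct accounts in a short window. The log format and thresholds are assumptions for illustration only.

from collections import defaultdict

WINDOW_SECS, MAX_ACCOUNTS = 300, 10

failed_logins = [  # (timestamp_seconds, source_ip, account) from auth logs
    (t, "203.0.113.7", f"user{t}") for t in range(0, 120, 5)  # 24 accounts in 2 min
] + [(30, "198.51.100.2", "alice"), (90, "198.51.100.2", "alice")]

accounts_by_ip = defaultdict(set)
for ts, ip, account in failed_logins:
    if ts <= WINDOW_SECS:
        accounts_by_ip[ip].add(account)

for ip, accounts in accounts_by_ip.items():
    if len(accounts) > MAX_ACCOUNTS:
        print(f"{ip}: failed logins on {len(accounts)} accounts - likely spraying")

Bot-driven sprays rotate source IPs to defeat exactly this kind of per-IP counting, which is why behavioral signals at the login form add a layer the network view can’t.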

Published: December 9, 2024 by Jesse Hoggard

Dormant fraud, sleeper fraud, trojan horse fraud... whatever you call it, it’s an especially insidious form of account takeover fraud (ATO) that fraud teams often can’t detect until it’s too late. Fraudsters create accounts with stolen credentials or gain access to existing ones, onboard under the fake identity, then lie low, waiting for an opportunity to attack. It takes a strategic approach to defeat the enemy from within, and fraudsters assume you won’t have the tools in place to even know where to start.

Dormant fraud uncovered: A case study

NeuroID, a part of Experian, has seen the dangers of dormant fraud play out in real time. As a new customer to NeuroID, a payment processor wanted to backtest its user base for potential signs of fraud. Upon analyzing the customer base’s onboarding behavioral data, we discovered that more than 100K accounts were likely to be dormant fraud. The payment processor hadn’t considered these accounts suspicious and didn’t see any risk in letting them remain active, despite the fact that none of them had completed a transaction since onboarding.

Why did we flag these as risky?

- Low familiarity: Our testing revealed behavioral red flags, such as copying and pasting into fields or constant tab switching. These are strong indicators that the applicant is applying with personally identifiable information (PII) that isn’t their own.
- Fraud clusters: Many of these accounts used the same web browser, device, and IP address during sign-up, suggesting that one fraudster was signing up for multiple accounts. We found hundreds of clusters like these, many with 50 or more accounts belonging to the same device and IP address within our customer’s user base. (A bare-bones sketch of this grouping appears at the end of this post.)

It was clear that this payment processor’s fraud stack had gaps that left it vulnerable. These dormant accounts could have caused significant damage once mobilized: receiving or transferring stolen funds, misrepresenting their financial position, or building toward a bust-out.

Dormant fraud thrives in the shadows beyond onboarding. These fraudsters keep accounts “dormant” until they’re long past onboarding detection measures. And once they’re in, they can often easily transition to a higher-risk account; after all, they’ve already confirmed they’re trustworthy. This type of attack can involve fraudulent accounts remaining inactive for months, allowing them to bypass standard fraud detection methods that focus on immediate indicators.

Dormant fraud gets even more dangerous when a hijacked account has built trust just by existing. For example, some banks provide a higher credit line just for current customers, no matter their activities to date. The more accounts an identity has in good standing, the greater the chance that they’ll be mistaken for a good customer and given even more opportunities to commit higher-level fraud. This is why we often talk to our customers about progressive onboarding as a way to overcome both dormant fraud risks and the onboarding friction caused by asking for too much information, too soon.

Progressive onboarding, dormant fraud, and the friction balance

Progressive onboarding shifts from the one-size-fits-all model by gathering only truly essential information initially and asking for more as customers engage more. This is a direct counterbalance to the approach that sometimes turns customers off by asking for too much too soon and adding too much friction at initial onboarding. It also helps ensure the ongoing checks that fight dormant fraud. We’ve seen this approach (already growing popular in payment processing) be especially useful in every type of financial business. Here’s how it works:

- A prospect visits your site to explore options. They may just want to understand fees and get a feel for your offerings. At this stage, you might ask for minimal information, just a name and email, without requiring a full fraud check or credit score. It’s a low-commitment ask that keeps things simple for casual prospects who are just browsing, while also keeping your costs low so you don’t spend a full fraud check on an uncommitted visitor.
- As the prospect becomes a true customer and begins making small transactions, say a $50 transfer, you request additional details like their date of birth, physical address, or phone number. This minor step-up in information allows for a basic behavioral analytics fraud check while maintaining a low barrier of time and PII requested for a low-risk activity.
- With each new level of engagement and transaction value, the information requested increases accordingly. If the customer wants to transfer larger amounts, like $5,000, they’ll understand the need to provide more details; it aligns with the idea of a privacy trade-off, where the customer’s willingness to share information grows as their trust and need for services increase. Meanwhile, your business allocates resources to those who are fully engaged, rather than to one-time visitors or casual sign-ups, and keeps an eye on dormant fraudsters who might have expected no barrier to additional transactions.

Progressive onboarding is effective not just against dormant fraud and onboarding friction, but also against fraudsters who sneak in through unseen gaps. In another case, we worked with a consumer finance platform to help identify gaps in its fraud stack. In one attack, fraudsters probed until they found the product with the easiest barrier of entry; once inside, they immediately committed a full-force bot attack on higher-value returns. The attack wasn’t based on dormancy, but on complacency. The fraudsters assumed this consumer finance platform wouldn’t realize that low-controls onboarding for one solution could lead to ease of access to much more. And they were right.

After closing that vulnerability, we helped this customer create progressive onboarding that includes behavior-based fraud controls for every single user, including those who already had accounts and had built that assumed trust, and for low-risk entry points. This weeded out any dormant fraudsters already onboarded who were trying to take advantage of that trust, as they had to go through behavioral analytics and other new controls based on the risk level of the product.

Behavioral analytics gives you confidence that every customer is trustworthy, from the moment they enter the front door to even after they’ve kicked off their shoes to stay a while.

Behavioral analytics shines a light on shadowy corners

Behavioral analytics is proven beyond onboarding: within any part of a user interaction, our signals detect low familiarity, high-risk behavior and likely fraud clusters. In our experience, building a progressive onboarding approach with just these two signal points alone would provide significant results, and would help stop sophisticated fraudsters from perpetrating dormant fraud, including large-scale bust-outs.

Want to find out how progressive onboarding might work for you? Contact us for a free demo and deep dive into how behavioral analytics can help throughout your user journey.

Contact us for a free demo
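As a bare-bones version of the fraud-cluster check described in the case study above: group signups by their device, browser, and IP fingerprint, then review any oversized cluster. The 50-account threshold mirrors the clusters NeuroID reported; the field names and data are otherwise hypothetical.

from collections import defaultdict

signups = [  # (account_id, device_id, browser, ip) captured at onboarding
    (f"acct_{i}", "dev_9", "chrome_120", "198.51.100.9") for i in range(55)
] + [("acct_x", "dev_1", "safari_17", "203.0.113.5")]

clusters = defaultdict(list)
for account, device, browser, ip in signups:
    clusters[(device, browser, ip)].append(account)

for key, accounts in clusters.items():
    if len(accounts) >= 50:
        print(f"cluster {key}: {len(accounts)} accounts - review before they activate")

Run retroactively, a check like this is how dormant accounts surface: they look unremarkable one at a time and damning in aggregate.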

Published: December 5, 2024 by Devon Smith

Despite being a decades-old technology, behavioral analytics is often still misunderstood. We’ve heard from fraud, identity, security, product, and risk professionals that exploring a behavior-based fraud solution brings up big questions, such as: What does behavioral analytics provide that I don’t get now? (Quick answer: a whole new signal and an earlier view of fraud) Why do I need to add even more data to my fraud stack? (Quick answer: it acts with your stack to add insights, not overload) How is this different from biometrics? (Quick answer: while biometrics track characteristics, behavioral analytics tracks distinct actions) These questions make sense — stopping fraud is complex, and, of course, you want to do your research to fully understand what ROI any tool will add. NeuroID, now part of Experian, is one of the only behavioral analytics-first businesses built specifically for stopping fraud. Our internal experts have been crafting behavioral-first solutions to detect everything from simple script fraud bots through to generative AI (genAI) attacks. We know how behavioral analytics works best within your fraud stack, and how to think strategically about using it to stop fraud rings, bot fraud, and other third-party fraud attacks. This primer will provide answers to the biggest questions we hear, so you can make the most informed decisions when exploring how our behavioral analytics solutions could work for you. Q1. What is behavioral analytics and how is it different from behavioral biometrics? A common mistake is to conflate behavioral analytics with behavioral biometrics. But biometrics rely on unique physical characteristics — like fingerprints or facial scans — used for automated recognition, such as unlocking your phone with Face ID. Biometrics connect a person’s data to their identity. But behavioral analytics? They don’t look at an identity. They look at behavior and predict risk. While biometrics track who a person is, behavioral analytics track what they do. For example, NeuroID’s behavioral analytics observes every time someone clicks in a box, edits a field, or hovers over a section. So, when a user’s actions suggest fraudulent intent, they can be directed to additional verification steps or fully denied. And if their actions suggest trustworthiness? They can be fast-tracked. Or, as a customer of ours put it: "Using NeuroID decisioning, we can confidently reject bad actors today who we used to take to step-up. We also have enough information on good applicants sooner, so we can fast-track them and say ‘go ahead and get your loan, we don’t need anything else from you.’ And customers really love that." - Mauro Jacome, Head of Data Science for Addi (read the full Addi case study here). The difference might seem subtle, but it’s important. New laws on biometrics have triggered profound implications for banks, businesses, and fraud prevention strategies. The laws introduce potential legal liabilities, increased compliance costs, and are part of a growing public backlash over privacy concerns. Behavioral signals, because they don’t tie behavior to identity, are often easier to introduce and don’t need the same level of regulatory scrutiny. The bottom line is that our behavioral analytics capabilities are unique from any other part of your fraud stack, full-stop. And it's because we don’t identify users, we identify intentions. 
Simply by tracking users’ behavior on your digital form, behavioral analytics powered by NeuroID tells you if a user is human or a bot; trustworthy or risky. It looks at each click, edit, keystroke, pause, and other tiny interactions to measure every users’ intention. By combining behavior with device and network intelligence, our solutions provide new visibility into fraudsters hiding behind perfect PII and suspicious devices. The result is reduced fraud costs, fewer API calls, and top-of-the-funnel fraud capture with no tuning or model integration on day one. With behavioral analytics, our customers can detect fraud attacks in minutes, instead of days. Our solutions have proven results of detecting up to 90% of fraud with 99% accuracy (or <1% false positive rate) with less than 3% of your population getting flagged. Q2. What does behavioral analytics provide that I don’t get now? Behavioral analytics provides a net-new signal that you can’t get from any other tools. One of our customers, Josh Eurom, Manager of Fraud for Aspiration Banking, described it this way: “You can quantify some things very easily: if bad domains are coming through you can identify and stop it. But if you see things look odd, yet you can’t set up controls, that’s where NeuroID behavioral analytics come in and captures the unseen fraud.” (read the full Aspiration story here) Adding yet another new technology with big promises may not feel urgent. But with genAI fueling synthetic identity fraud, next-gen fraud bots, and hyper-efficient fraud ring attacks, time is running out to modernize your stack. In addition, many fraud prevention tools today only focus on what PII is submitted — and PII is notoriously easy to fake. Only behavioral analytics looks at how the data is submitted. Behavioral analytics is a crucial signal for detecting even the most modern fraud techniques. Watch our webinar: The Fraud Bot Future-Shock: How to Spot and Stop Next-Gen Attacks  Q3. Why do I need to add even more data to my fraud stack? Balancing fraud, friction, and financial impact has led to increasingly complex fraud stacks that often slow conversions and limit visibility. As fraudsters evolve, gaps grow between how quickly you can keep up with their new technology. Fraudsters have no budget constraints, compliance requirements, or approval processes holding them back from implementing new technology to attack your stack, so they have an inherent advantage. Many fraud teams we hear from are looking for ways to optimize their workflows without adding to the data noise, while balancing all the factors that a fraud stack influences beyond overall security (such as false positives and unnecessary friction). Behavioral analytics is a great way to work smarter with what you have. The signals add no friction to the onboarding process, are undetectable to your customers, and live on a pre-submit level, using data that is already captured by your existing application process. Without requiring any new inputs from your users or stepping into messy biometric legal gray areas, behavioral analytics aggregates, sorts, and reviews a broad range of cross-channel, historical, and current customer behaviors to develop clear, real-time portraits of transactional risks. By sitting top-of-funnel, behavioral analytics not only doesn’t add to the data noise, it actually clarifies the data you currently rely on by taking pressure off of your other tools. With these insights, you can make better fraud decisions, faster. 
Or, as Eurom put it: "Before NeuroID, we were not automatically denying applications. They were getting an IDV check and going into a manual review. But with NeuroID at the top of our funnel, we implemented automatic denial based on the risky signal, saving us additional API calls and reviews. And we're capturing roughly four times more fraud. Having behavioral data to reinforce our decision-making is a relief."

The behavioral analytics difference

Now that the world has moved online, we're missing the body language clues that used to tell us whether someone was a fraudster. Behavioral analytics provides that digital body language. Behavioral cues, such as typing speed, hesitation, and mouse movements, highlight riskiness. The cause of that risk could be bots, stolen information, fraud rings, synthetic identities, or any combination of third-party fraud attack strategies. Behavioral analytics gives you the insight to distinguish between genuine applicants and potentially fraudulent ones without disrupting your customer's journey. By interpreting behavioral patterns at the very top of the onboarding funnel, behavior helps you proactively mitigate fraud, reduce false positives, and streamline onboarding, so you can lock out fraudsters and let in legitimate users. And it all comes from data you already capture, simply by tracking interactions on your site.

Stop fraud, faster: 5 simple uses where behavioral analytics shines

While how you approach a behavioral analytics integration will vary based on numerous factors, here are some of the immediate, common use cases.

Detecting fraud bots and fraud rings

Behavioral analytics can identify fraud bots by their frameworks, such as Puppeteer or Stealth, and through their behavioral patterns, so you can protect against even the most sophisticated fourth-generation bots. NeuroID provides holistic coverage for bot and fraud ring detection, passively and with no customer friction, often eliminating the need for CAPTCHA and reCAPTCHA. With this data alone, you could potentially blacklist suspected fraud bot and fraud ring attacks at the top of the fraud prevention funnel, avoiding extra API calls.

Sussing out scams and coercion

When users make account changes or transactions under coercion, they often show unfamiliarity with the destination account or shipping address they enter. Our real-time assessment detects these risk indicators, including hesitancy, multiple corrections, and slow typing, alerting you in real time to look closer.

Stopping use of compromised cards and stolen IDs

Traditional PII checks can fall short against today's sophisticated synthetic identity fraud. Behavioral analytics uncovers synthetic identities by evaluating how PII is entered, instead of relying on the PII itself (which is often compromised). For example, our behavioral signals can assess a user's familiarity with the billing address they're entering for a credit card or bank account. Genuine account holders show strong familiarity, while signs of unfamiliarity indicate an account under attack.

Detecting money mules

Our behavioral analytics solutions track how familiar users are with the addresses they enter, conducting a real-time, sub-millisecond familiarity assessment. Risk markers such as hesitancy, multiple corrections, and slow typing speed raise flags for further exploration (a sketch of how such a familiarity score might be computed follows this list).

Stopping promotion and discount abuse

Our behavioral analytics identifies risky versus trustworthy users in promo and discount fields. By assessing behavior, device, and network risk, we help you determine whether your promotions attract more risky than trustworthy users, preventing fraudsters from abusing discounts.
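As promised above, here is a minimal sketch of how the familiarity markers named in these use cases (hesitancy, corrections, typing speed) could combine into a single score. Every threshold is a hypothetical assumption for illustration; a production system would learn these from data rather than hand-tune them.

```typescript
// Illustrative sketch only: a rough "familiarity" score for one field
// (e.g., a billing address). All thresholds are hypothetical.

type FieldSession = {
  keystrokeGapsMs: number[]; // time between consecutive keydowns in the field
  corrections: number;       // backspace/delete presses while editing
  hesitationMs: number;      // pause between focusing the field and first key
};

// Returns 0 (unfamiliar, risky) to 1 (familiar, likely a genuine account holder).
export function familiarityScore(s: FieldSession): number {
  const avgGap =
    s.keystrokeGapsMs.reduce((a, b) => a + b, 0) /
    Math.max(s.keystrokeGapsMs.length, 1);

  // Genuine users type known data fluidly: short keystroke gaps, few
  // corrections, little hesitation. Each marker becomes a 0..1 penalty.
  const slowTyping = Math.min(avgGap / 500, 1);          // >500ms avg gap maxes out
  const editing = Math.min(s.corrections / 5, 1);        // 5+ corrections maxes out
  const hesitation = Math.min(s.hesitationMs / 3000, 1); // 3s+ pause maxes out

  return 1 - (slowTyping + editing + hesitation) / 3;
}
```

A low score on a destination account, shipping address, or billing field is the kind of flag that would route a session to closer review in the scam, stolen-ID, and money mule scenarios above.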
Learn more about our behavioral analytics solutions.

Learn more Watch webinar

Published: November 21, 2024 by Allison Lemaster

With cyber threats intensifying and data breaches rising, understanding how to respond to incidents is more important than ever. In this interview, Michael Bruemmer, Head of Global Data Breach Resolution at Experian, is joined by Matthew Meade, Chair of the Cybersecurity, Data Protection & Privacy Group at Eckert Seamans, to discuss the realities of data breach response. Their session, “Cyber Incident Response: A View from the Trenches,” brings insights from the field and offers a preview of Experian's 2025 Data Breach Industry Forecast, including the role of generative artificial intelligence (GenAI) in data breaches. From the surge in business email compromise (BEC) to the relentless threat of ransomware, Bruemmer and Meade dive into the key issues facing organizations large and small today. Drawing on Experian's experience handling nearly 5,000 breaches this year, Bruemmer sheds light on effective response practices and reveals common pitfalls. Meade, who served as editor-in-chief for the Sedona Conference's new Model Data Breach Notification Law, explains the implications of these regulatory updates for organizations and highlights how standardized notification practices can improve outcomes. Their insights offer a proactive guide to tackling tomorrow's cyber threats, making the session a must-listen for anyone aiming to stay one step ahead. Listen to the full interview for a valuable look at both the current landscape and what's next. Click here for more insight into safeguarding your organization from emerging cyber threats.

Published: November 20, 2024 by Julie Lee
