In early 2025, European authorities shut down a cybercriminal operation called JokerOTP, responsible for more than 28,000 phishing attacks across 13 countries. According to Forbes, the group used one-time password (OTP) bots to bypass two-factor authentication (2FA), netting an estimated $10 million in fraudulent transactions. It's just one example of how fraudsters are exploiting digital security gaps with AI and automation.

What is an OTP bot?

An OTP bot is an automated tool designed to trick users into revealing their one-time password, a temporary code used in multifactor authentication (MFA). These bots are often paired with stolen credentials, phishing sites or social engineering to bypass security steps and gain unauthorized access. Here’s how a typical OTP bot attack works:

- A fraudster logs in using stolen credentials.
- The user receives an OTP from their provider.
- Simultaneously, the OTP bot contacts the user via SMS, call or email, impersonating the institution and asking for the OTP.
- If the user shares the OTP, the attacker gains control of the account.

The real risk: account takeover

OTP bots are often just one part of a larger account takeover strategy. Once a bot bypasses MFA, attackers can:

- Lock users out of their accounts
- Change contact details
- Drain funds or open fraudulent lines of credit

Stopping account takeover means detecting and disrupting the attack before access is gained. That’s where a strong account takeover and login defense becomes critical: monitoring suspicious login behaviors and recognizing high-risk signals early.

How accessible are OTP bots?

Mentions of OTP bots on dark web forums jumped 31% in 2024, and bot services offering OTP bypass tools were being sold for just $10 to $50 per attack. One user on a Telegram-based OTP bot platform reported earning $50,000 in a month. The barrier to entry is low, and these figures show just how easy and profitable it is to launch OTP bot attacks at scale.
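The early-warning approach described above can be sketched in code. What follows is a toy risk score, not Experian's actual model; the signal names, weights and thresholds are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    known_device: bool            # has this device been seen on the account before?
    otp_requests_last_hour: int   # bots often hammer the OTP trigger repeatedly
    country_matches_history: bool # does the login country fit past behavior?

def login_risk_score(attempt: LoginAttempt) -> int:
    """Toy additive score; real systems weight far more signals."""
    score = 0
    if not attempt.known_device:
        score += 40
    if attempt.otp_requests_last_hour > 3:
        score += 40
    if not attempt.country_matches_history:
        score += 20
    return score

# Familiar device, single OTP request, usual country: low risk
print(login_risk_score(LoginAttempt(True, 1, True)))     # 0
# Unknown device hammering OTPs from an unusual country: step up or block
print(login_risk_score(LoginAttempt(False, 6, False)))   # 100
```

In practice, a score above some threshold would trigger a step-up challenge or block the login before the OTP is ever sent — which is what recognizing high-risk signals early amounts to.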
The evolution of fraud bots

OTP bots are one part of a rising wave of fraud bots. According to our report, The Fraud Attack Strategy Guide, bots accounted for 30% of fraud attempts at the beginning of 2024. By the end of the year, that number had risen to 80% — a nearly threefold increase in just 12 months. Today’s fraud bots are more dynamic and adaptive than ever. They go beyond simple scripts, mimicking human behavior, shifting tactics in real time and launching large-scale bot attacks across platforms. Some bypass OTPs entirely or refine their tactics with each failed attempt. With generative AI in the mix, bot-based fraud is getting faster, cheaper and harder to detect. Effective fraud defense now depends on detecting intent, analyzing behavior in real time and stopping threats earlier in the process.

Read this blog: Learn more about identifying and stopping bot attacks.

A cross-industry problem

OTP bots can target any organization that leverages 2FA, but the impact varies by sector.

Financial services, fintech and buy now, pay later (BNPL) providers are top targets for OTP bot attacks due to high-value accounts, digital onboarding and reliance on 2FA. In one case outlined in The Fraud Attack Strategy Guide, a BNPL provider saw more than 25,000 bot attempts in 90 days, with over 3,000 bots completing applications, bypassing OTP or using synthetic identities.

Retail and e-commerce platforms face attacks designed to take over customer accounts and make unauthorized purchases using stored payment methods, gift cards or promo credits. OTP bots can help fraudsters trigger and intercept verification codes tied to checkout or login flows.

Healthcare and education organizations can be targeted for their sensitive data and widespread use of digital portals. OTP bots can help attackers access patient records and student or staff accounts, or bypass verification during intake and application flows, leading to phishing, insurance fraud or data theft.
Government and public sector entities are increasingly vulnerable as fraudsters exploit digital services meant for public benefits. OTP bots may be used to sign up individuals for disbursements or aid programs without their knowledge, enabling fraudsters to redirect payments or commit identity theft. This abuse not only harms victims but also undermines trust in public systems.

Across sectors, the message is clear: bots are getting too far in before they're detected. Organizations across all industries need the ability to recognize bot risk at the very first touchpoint; the earlier, the better.

The limitations of OTP defense

OTP is a strong second factor, but it’s not foolproof. If a bot reaches the OTP stage, the attacker has almost certainly already:

- Stolen or purchased valid credentials
- Found a way to trigger the OTP
- Put a social engineering play in motion

Fighting bots earlier in the funnel

The most effective fraud prevention doesn’t just react to bots at the OTP step; it stops them before they trigger OTPs in the first place. But to do that, you need to understand how modern bots operate and how our bot detection solutions, powered by NeuroID, fight back.

The rise of GenAI-powered bots

Bot creation has become dramatically easier. Thanks to generative AI and widely available bot frameworks, fraudsters no longer need deep technical expertise to launch sophisticated attacks. Today’s Gen4 bots can simulate human-like interactions such as clicks, keystrokes and mouse movements with just enough finesse to fool traditional bot detection tools. These bots are designed to bypass security controls, trigger OTPs, complete onboarding flows and even submit fraudulent applications. They are built to blend in.

Detecting bots across two key dimensions

Our fraud detection solutions are purpose-built to uncover these threats by analyzing risk signals across two critical dimensions.

1. Behavioral patterns

Even the most advanced bots struggle to perfectly mimic human behavior.
Our tools analyze thousands of micro-signals to detect deviations, including:

- Mouse movement smoothness and randomness
- Typing cadence, variability and natural pauses
- Field and page transition timing
- Cursor trajectory and movement velocity
- Inconsistent or overly “perfect” interaction patterns

By identifying unnatural rhythms or scripted inputs, we can distinguish real users from automation before the OTP step.

2. Device and network intelligence

In parallel, our technology examines device and network indicators that often reveal fraud at scale:

- Detection of known bot frameworks and automation tools
- Device fingerprinting to flag repeat offenders
- Link analysis connecting devices across multiple sessions or identities
- IP risk, geolocation anomalies and device emulation signals

This layered approach helps identify fraud rings and coordinated bot attacks, even when attackers attempt to mask their activity.

A smarter way to stop bots

We offer both a highly responsive, real-time API for instant bot detection and a robust dashboard for investigative analytics. This combination allows fraud teams to stop bots earlier in the funnel — before they trigger OTPs, fill out forms or submit fake credentials — and to analyze emerging trends across traffic patterns. Our behavioral analytics, combined with device intelligence and adaptive risk modeling, empowers organizations to act on intent rather than just outcomes. Good users move forward without friction. Bad actors are stopped at the source.

Ready to stop bots in their tracks? Explore Experian’s fraud prevention services.

Learn more

*This article includes content created by an AI language model and is intended to provide general information.
Bot fraud has long been a major concern for digital businesses, but evolving attacks at every stage of the customer lifecycle have overshadowed an ever-present issue: click fraud. Click fraud is a cross-departmental challenge for businesses, and stopping it requires a level of insight and understanding that many businesses don’t yet have. That has left many fraud professionals asking: What is click fraud? Why is it so dangerous? How can it be prevented?

What is click fraud?

A form of bot fraud, click fraud occurs when bots drive fraudulent clicks to websites, digital ads and emails. Click fraud typically exploits application flows or digital advertising; traffic from click bots appears to be genuine but is actually fraudulent, incurring excessive costs through API calls or ad clicks. These fraudulent clicks won’t result in any sales, but they will reveal sensitive information, inflate costs and clutter data.

What is the purpose of click fraud?

It depends on the target. We've seen click bots begin (but not complete) insurance quotes or loan applications, gathering information on competitors’ rates. In other cases, fraudsters use click fraud to drive artificial clicks to ads on their own sites, inflating revenue from PPC/CPC advertising. The reasons behind click fraud vary widely, but regardless of intent, its impact on businesses runs deep.

The dangers of click fraud

On the surface, click fraud may seem less harmful than other types of fraud. Unlike application fraud and account takeover fraud, consumers’ data isn’t being stolen, and direct fraud losses are relatively small. But click fraud can still be detrimental to businesses' bottom lines: every API call triggered by a click bot is an additional expense, and swarms of click bots distort data that’s invaluable to fraud attack detection and customer acquisition. The impact of click fraud extends beyond that, though.
Not only can click bots gather sensitive data like insurance quotes, but click fraud can also be a gateway to more insidious fraud schemes. Fraud rings are constantly looking for vulnerabilities in businesses’ systems, often using bots to probe for back-door entrances to applications and ways to bypass fraud checks. For example, if an ad directs to an unlisted landing page that provides an alternate entry into a business’s ecosystem, fraudsters can identify it through click fraud and use bots to find vulnerabilities in that alternate application process. In doing so, they lay the groundwork for larger attacks with more tangible losses.

Keys to click fraud prevention

Without the right tools in place, modern bots can appear indistinguishable from humans, and many businesses struggle to identify increasingly sophisticated bots on their websites as a result. Allowing click fraud to go undetected can make it extremely difficult to know when a more serious fraud attack is at your doorstep.

Preventing click fraud requires real-time visibility into your site’s traffic, including accurate bot detection and analysis of bot behavior. It’s one of many uses for behavioral analytics in fraud detection: behavioral analytics identifies advanced bots pre-submit, empowering businesses to better differentiate click fraud from genuine traffic and from other fraud types. With behavioral analytics, bot attacks can be detected and stopped before unnecessary costs are incurred and sensitive information is revealed.

Learn more about our behavioral analytics for fraud detection.
Bots have been a consistent thorn in fraud teams’ sides for years. But since the advent of generative AI (genAI), what used to be just one more fraud type has become a fraud tsunami. This surge in fraud bot attacks has brought with it:

- A 108% year-over-year increase in credential stuffing to take over accounts1
- A 134% year-over-year increase in carding attacks, where stolen cards are tested1
- New account opening fraud at more than 25% of businesses in the first quarter of 2024

While fraud professionals rush to beat back the onslaught, they’re also reckoning with the ever-evolving threat of genAI. A major factor in fraud bots’ new scalability and strength, genAI was the #1 stress point identified by fraud teams in 2024, and 70% expect it to be a challenge moving forward, according to Experian’s U.S. Identity and Fraud Report. This fear is well-founded. Fraudsters are wasting no time incorporating genAI into their attack arsenal. GenAI has created a new generation of fraud bot tools that make bot development more accessible and sophisticated. These bots reverse-engineer fraud stacks, testing the limits of their targets’ defenses to find the triggers for step-ups and checks, then adapting to avoid setting them off. How do bot detection solutions fare against this next generation of bots?

The evolution of fraud bots

The earliest fraud bots, which first appeared in the 1990s,2 were simple scripts with limited capabilities. Fraudsters soon began using these scripts to execute basic tasks on their behalf — mainly form spam and light data scraping. Fraud teams responded, implementing bot detection solutions that continued to evolve as the threats became more sophisticated. The evolution of fraud bots was steady — and mostly balanced against fraud-fighting tools — until genAI supercharged it.
Today, fraudsters are leveraging genAI’s core ability (analyzing datasets to identify patterns, then using those patterns to generate solutions) to create bots capable of large-scale attacks with unprecedented sophistication. These genAI-powered fraud bots can analyze onboarding flows to identify step-up triggers, automate attacks at high-volume times, and even conduct “behavior hijacking,” where bots record and replicate the behaviors of real users.

How next-generation fraud bots beat fraud stacks

For years, a tried-and-true approach to fraud bot detection was to look for the non-human giveaways: lightning-fast transition speeds, eerily consistent keystrokes, nonexistent mouse movements, and repeated device and network data were all tell-tale signs of a bot. Fraud teams could base their bot detection strategies on these behavioral red flags.

Stopping today’s next-generation fraud bots isn’t as straightforward. Because they were built specifically to mimic human behavior and to cycle through device IDs and IP addresses, today’s bots often pass as normal, human applicants and circumvent many of the barriers that blocked their predecessors. The data the bots provide is better, too:3 fraudsters are using genAI to streamline and scale the creation of synthetic identities.4 By equipping their human-like bots with a bank of high-quality synthetic identities, fraudsters have their most potent, advanced attack avenue to date.

Skirting traditional bot detection with their human-like capabilities, next-generation fraud bots can bombard their targets with massive, often undetected, attacks. In one attack analyzed by NeuroID, a part of Experian, fraud bots made up 31% of a business's onboarding volume on a single day. That’s nearly one-third of the business’s volume comprised of bots attempting to commit fraud.
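The classic giveaways described above (speed and consistency) are simple to check programmatically, which is why they worked for so long. Here is a hedged sketch with invented thresholds, not NeuroID's actual detection logic: it flags the metronome-like keystroke timing of an old-style script, but a next-generation bot that injects human-like jitter would sail past it.

```python
import statistics

def keystroke_flags(intervals_ms: list[float]) -> dict:
    """Classic timing checks on the gaps between keystrokes (toy thresholds)."""
    mean = statistics.mean(intervals_ms)
    jitter = statistics.pstdev(intervals_ms)
    return {
        "too_fast": mean < 20,        # humans rarely sustain sub-20 ms keystrokes
        "too_consistent": jitter < 5, # scripted input has near-zero timing jitter
    }

human = [110, 240, 95, 310, 180, 150]   # irregular, with natural pauses
script = [50, 50, 51, 50, 49, 50]       # metronome-like automated input

print(keystroke_flags(human))   # {'too_fast': False, 'too_consistent': False}
print(keystroke_flags(script))  # {'too_fast': False, 'too_consistent': True}
```

Rules like these are exactly what genAI-powered bots are now trained to evade, which is why detection has had to move to subtler behavioral signals.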
If the business hadn’t had the right tools in place to separate these bots from genuine users, it wouldn’t have been able to stop the attack until it was too late.

Beating fraud bots with behavioral analytics: The next-generation approach

Next-generation fraud bots pose a unique threat to digital businesses: their data appears legitimate, and they look human when they’re interacting with a form. So how do fraud teams differentiate fraud bots from actual human users?

NeuroID’s product development teams discovered key nuances that separate next-generation bots from humans, and we’ve updated our industry-leading bot detection capabilities to account for them. A big one is mousing patterns: random, erratic cursor movements are part of what makes next-generation bots so eerily human-like, but their movements are still noticeably smoother than a real human’s. Other bot detection solutions (including our V1 signal) wouldn’t flag these advanced cursor movements as bot behavior, but our new signal is designed to identify even the most granular giveaways of a next-generation fraud bot. For example, behavioral analytics can identify repeated actions — down to the pixel a cursor lands on — during a bot attack and block users exhibiting those behaviors. Our behavioral analytics was built specifically to combat next-gen challenges with scalable, real-time solutions. This proactive protection against advanced bot behaviors is crucial to preventing larger attacks. Fraud bots will continue to evolve. But so will we.

For more on fraud bots’ evolution, download our Emerging Trends in Fraud: Understanding and Combating Next-Gen Bots report.

Learn more

Sources
1 HUMAN Enterprise Bot Fraud Benchmark Report
2 Abusix
3 NeuroID
4 Biometric Update
U.S. federal prosecutors have indicted Michael Smith of North Carolina for allegedly orchestrating a $10 million fraud scheme involving AI-generated music. Smith is accused of creating fake bands and using AI tools to produce hundreds of tracks, which were then streamed by fake listeners on platforms like Spotify, Apple Music and Amazon Music. Despite the artificial engagement, the scheme generated real royalty payments, defrauding these streaming services. This case marks the first prosecution of its kind and highlights a growing financial risk: the potential for rapid, large-scale fraud on digital platforms when content and engagement can be easily fabricated.

A new report from Imperva Inc. highlights the growing financial burden of insecure APIs and bot attacks on businesses, costing up to $186 billion annually. Key findings point to the heavy economic burden on large companies, whose complex and extensive API ecosystems are often unsecured. Last year, enterprises managed about 613 API endpoints on average, a number expected to grow, increasing the associated risks.

APIs' exposure to bot attacks

Bot attacks, similar to those seen in streaming fraud, are also plaguing financial institutions. The risks are significant, weakening both security and financial stability.

1. Fraudulent transactions and account takeover

- Automated fraudulent transactions: Bots can perform high volumes of small, fraudulent transactions across multiple accounts, causing financial loss and overwhelming fraud detection systems.
- Account takeover: Bots can attempt credential stuffing, using compromised login data to access user accounts. Once inside, attackers can steal funds or sensitive information, leading to significant financial and reputational damage.

2. Synthetic identity fraud

- Creating fake accounts: Bots can be used to generate large numbers of synthetic identities, which are then used to open fake accounts for money laundering, credit fraud or other illicit activities.
- Loan or credit card fraud: Using fake identities, bots can apply for loans or credit cards, withdrawing funds with no intent to repay and leaving financial institutions with significant losses.

3. Exploiting API vulnerabilities

- API abuse: Just as bots exploit API endpoints in streaming services, they can target vulnerable APIs in financial platforms to extract sensitive data or initiate unauthorized transactions, leading to significant data breaches.
- Data exfiltration: Bots can use APIs to extract financial data, customer details and transaction records, potentially leading to identity theft or data being sold on the dark web.

Bot attacks targeting financial institutions can result in extensive fraud, data breaches, regulatory fines and loss of customer trust, with significant financial and operational consequences.

Safeguarding financial integrity

To safeguard your business from these attacks, particularly those coming through unmonitored APIs, a multi-layered defense strategy is essential. Here’s how you can protect your business and ensure its financial integrity:

1. Monitor and analyze data patterns

- Real-time analytics: Implement sophisticated monitoring systems to track user behavior continuously. By analyzing user patterns, you can detect irregular spikes in activity that may indicate bot-driven attacks. These anomalies should trigger alerts for immediate investigation.
- AI, machine learning and geo-analysis: Leverage AI and machine learning models to spot unusual behaviors that can signal fraudulent activity. Geo-analysis tools help identify traffic originating from regions known for bot farms, allowing you to take preventive action before damage occurs.

2. Strengthen API access controls

- Limit access with token-based authentication: Implement token-based authentication to limit API access to verified applications and users. This reduces the chances of unauthorized or bot-driven API abuse.
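As a minimal sketch of token-based access control, the snippet below signs a client identifier with an HMAC and rejects any request whose token doesn't verify. It is illustrative only; a real deployment would use an established scheme such as OAuth 2.0 bearer tokens or signed JWTs with expiry, with the secret loaded from a secrets manager, and the client name here is hypothetical.

```python
import hashlib
import hmac

SECRET = b"demo-secret"  # illustrative only; never hardcode secrets in production

def issue_token(client_id: str) -> str:
    """Return '<client_id>.<signature>' for a verified application."""
    sig = hmac.new(SECRET, client_id.encode(), hashlib.sha256).hexdigest()
    return f"{client_id}.{sig}"

def verify_token(token: str) -> bool:
    """Accept the call only if the signature matches; reject anything malformed."""
    try:
        client_id, sig = token.rsplit(".", 1)
    except ValueError:
        return False  # no separator at all: malformed token
    expected = hmac.new(SECRET, client_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = issue_token("partner-app-42")         # hypothetical client name
print(verify_token(token))                    # True
print(verify_token("partner-app-42.forged"))  # False
```

An API gateway would run a check like verify_token on every call before the request reaches backend services, cutting off bot traffic that lacks a validly issued token.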
- Control third-party integrations: Restrict API access to trusted, vetted third-party services. Ensure that each external service is thoroughly reviewed to prevent malicious actors from exploiting your platform.

3. Implement robust account creation procedures

- PII identity verification solutions: Protect personal and sensitive data by authenticating an applicant's identity, helping to prevent fraud and identity theft.
- Email and phone verification: Requiring email or phone verification during account creation can minimize the risk of mass fake-account generation, a common tactic bots use for fraudulent activity.
- Combating bots-as-a-service: With intent-based deep behavioral analysis (IDBA), even the most sophisticated bots can be spotted without adding friction.

4. Establish strong anti-fraud alliances

- Collaborate with industry networks: Join industry alliances or working groups that focus on API security and fraud prevention. Staying informed about emerging threats and sharing best practices with peers will help you anticipate new attack strategies.

5. Continuous customer and account monitoring

- Behavior analysis for repeat offenders: Monitor for repeat fraudulent behavior from the same accounts or users. If certain users or transactions display consistent signs of manipulation, flag them for detailed investigation and potential restrictions.
- User feedback loops: Encourage users to report suspicious activity. This crowd-sourced intelligence can be invaluable for identifying bot activity quickly and limiting the scope of damage.

6. Maintain transparency and accountability

- Audit and report regularly: Offer regular, transparent reports on API usage and your anti-fraud measures. This builds trust with stakeholders and customers, who see your proactive steps toward securing the platform.
- Real-time dashboards: Provide users with real-time visibility into their data streams and account activities.
Unexplained spikes or dips can be flagged and investigated immediately, providing greater transparency and control.

Conclusion

Safeguarding your business from bot attacks and API abuse requires a comprehensive, multi-layered approach. By investing in advanced monitoring tools, enforcing strict API access controls and fostering collaboration with anti-fraud networks, your organization can mitigate the risks posed by bots while maintaining credibility and trust. The right strategy will not only protect your business but also preserve the integrity of your platform.

Learn more