Four GEN AI fraud trends to watch in 2024

January 17, 2024 by Mihail Blagoev, Solution Strategy Analyst, Global Identity & Fraud

We explore four fraud trends likely to be influenced the most by GEN AI technology in 2024, and what businesses can do to prevent them.

2023: The rise of Generative AI

2023 was marked by the rise of Generative Artificial Intelligence (GEN AI), with the technology’s impact (and potential impact) reverberating across businesses around the world. 2023 also witnessed the democratisation of GEN AI, with its usage made publicly available through multiple apps and tools such as OpenAI’s ChatGPT and DALL·E, Google’s Bard, Midjourney, and many others. ChatGPT even held the world record for the fastest-growing application in history (until it was surpassed by Threads) after reaching 100 million users in January 2023, less than two months after its launch.

The profound impact of GEN AI on everyday life is also reflected in the 2023 Word of the Year (WOTY) lists published by some of the biggest dictionaries in the world. Merriam-Webster’s WOTY for 2023 was ‘authentic’, a term that people are thinking about, writing about, aspiring to, and judging more than ever. It’s also no surprise that one of the other words highlighted by the dictionary was ‘deepfake’, reflecting the prominence of GEN AI-driven technology over the past 12 months. Among other dictionaries that publish WOTY lists, both the Cambridge Dictionary and Dictionary.com chose ‘hallucinate’, with new definitions of the verb describing AI tools presenting false information as truth or fact. A finalist in the Oxford list was the word ‘prompt’, referencing the instructions given to AI algorithms to influence the content they generate. Finally, Collins English Dictionary announced ‘AI’ as its WOTY to illustrate the significance of the technology throughout 2023.

GEN AI has many potential positive applications, from streamlining business processes and providing creative support for industries such as architecture, design, and entertainment, to significantly impacting healthcare and education. However, as signalled by some of the WOTY lists, it also poses many risks.

One of the biggest threats is its adoption by criminals to generate synthetic content that has the potential to deceive businesses and individuals. Unfortunately, easy-to-use and widely available GEN AI tools have also created a low barrier to entry for those willing to commit illegal activities. Threat actors leverage GEN AI to produce convincing deepfakes, including audio, images, and videos that are increasingly sophisticated and practically impossible to differentiate from genuine content without the help of technology. They are also exploiting the power of Large Language Models (LLMs) by creating eloquent chatbots and elaborate phishing emails to help them steal important information or establish initial communication with their targets.

GEN AI fraud trends to watch out for in 2024

As the lines between authentic and synthetic blur more than ever before, here are four fraud trends likely to be influenced most by GEN AI technology in 2024.

  1. A staggering rise in bogus accounts: (impacted by: deepfakes, synthetic PII)
    Account opening channels will continue to be impacted heavily by the adoption of GEN AI. As criminals try to establish a presence on social media and across business channels (e.g., LinkedIn) in an effort to build trust and credibility for further fraudulent attempts, this threat will expand well beyond the financial services industry. As GEN AI technology continues to evolve, the imminent emergence of highly convincing real-time audio and video deepfakes will give fraudsters even better tools to attempt to bypass document verification systems, biometric checks, and liveness checks. Additionally, they could scale their registration attempts by generating synthetic PII data such as names, addresses, emails, or national identification numbers.
  2. Persistent account takeover attempts carried out through a variety of channels: (impacted by: deepfakes, GEN AI generated phishing emails)
    The advancements in deepfakes present a big challenge to institutions with inferior authentication defences. Just as with the account opening channel, fraudsters will take advantage of new developments in deepfake technology to try to spoof authentication systems with voice, image, or video deepfakes, depending on the input required to gain access to an account. Furthermore, criminals could also try to fool customer support teams into helping them regain access they claim to have lost. Finally, the biggest threat is likely to be impersonation attempts (e.g., criminals pretending to be representatives of financial institutions or law enforcement) carried out against individuals to try to steal access details directly from them. This could also involve the use of sophisticated GEN AI generated emails that look like they are coming from authentic sources.
  3. An influx of increasingly sophisticated Authorised Push Payment fraud attempts: (impacted by: deepfakes, GEN AI chatbots, GEN AI generated phishing emails)
    Committing social engineering scams has never been easier. Recent advancements in GEN AI have given threat actors a handful of new ways to deceive their victims. They can now leverage deepfake voices, images, and videos in crimes such as romance scams, impersonation scams, investment scams, CEO fraud, or pig butchering scams. Unfortunately, deepfake technology can be applied to almost any situation where a form of genuine human interaction might be needed to support the authenticity of the criminals’ claims. Fraudsters can also bolster their cons with GEN AI-enabled chatbots to engage potential victims and gain their trust. On top of that, phishing messages have been elevated to new heights by LLM tools that handle translation, grammar, and punctuation, making these emails look more polished and trustworthy than ever before.
  4. A whole new world of GEN AI Synthetic Identity: (impacted by: deepfakes, synthetic PII)
    This is perhaps the biggest fraud threat that could impact financial institutions for years to come. GEN AI has made the creation of synthetic identities easier and more convincing than ever before. GEN AI tools give fraudsters the ability to generate fake PII data at scale with just a few prompts. Furthermore, criminals can leverage fabricated deepfake images of people who never existed to create synthetic identities from entirely bogus content. Unfortunately, since synthetic identities take time to be discovered and their losses are often written off as credit defaults rather than identified as fraud, the effect of GEN AI on this type of fraud will be felt for a long time.

How to prevent GEN AI related fraud

As GEN AI technology continues to evolve in 2024, so will its adoption by fraud perpetrators to carry out illegal activities. Institutions should be aware of the dangers it poses and equip themselves with the right tools and processes to tackle these risks. Here are a few suggestions on how this can be achieved:

Fight GEN AI with GEN AI: One of the biggest advantages of GEN AI is that the same techniques used to create synthetic data can also be trained to spot it. One such approach is supported by Generative Adversarial Networks (GANs), which employ two neural networks competing against each other: a generator and a discriminator. The generator creates synthetic data, while the discriminator evaluates the generated data and tries to distinguish between real and fake samples. Over time, both networks improve, and the discriminator becomes increasingly successful at recognising synthetic content. Other architectures used to create deepfakes, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and autoencoders, can also be trained to spot anomalies in audio, images, and video, such as inconsistencies in facial movements or features, inconsistencies in lighting or background, unnatural movements or flickering, and audio discrepancies. Finally, a hybrid approach that combines multiple algorithms often delivers more robust results.
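
To make the adversarial idea concrete, here is a minimal, illustrative sketch of a GAN training step in PyTorch. It works on generic feature vectors rather than real images or audio, and every name and dimension in it (FEATURE_DIM, NOISE_DIM, training_step, and so on) is an assumption made for illustration, not part of any production detection system.

```python
# Minimal GAN sketch: a generator learns to fabricate feature vectors,
# while a discriminator learns to tell genuine samples from fabricated ones.
import torch
import torch.nn as nn

FEATURE_DIM = 64   # assumed size of an embedding (e.g. from a face image or voice clip)
NOISE_DIM = 16     # assumed size of the random input to the generator

# Generator: maps random noise to synthetic feature vectors.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, FEATURE_DIM),
)

# Discriminator: outputs a logit scoring how likely a sample is to be genuine.
discriminator = nn.Sequential(
    nn.Linear(FEATURE_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_batch: torch.Tensor) -> None:
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) Train the discriminator to separate genuine from generated samples.
    fake_batch = generator(torch.randn(batch_size, NOISE_DIM)).detach()
    d_loss = (loss_fn(discriminator(real_batch), real_labels)
              + loss_fn(discriminator(fake_batch), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch_size, NOISE_DIM))),
                     real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Over many such steps the discriminator becomes a progressively better detector of synthetic content, which is the defensive property described above.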

Advanced analytics to monitor the whole customer journey and beyond: Institutions should deploy a fraud solution that leverages data from a variety of tools to spot irregular activity across the whole customer journey. That could include a spike in suspicious registrations or authentication attempts, unusual consumer behaviour, irregular login locations, suspicious device or browser data, or abnormal transaction activity. A best-in-class solution gives institutions the ability to monitor and analyse trends that go beyond a single transaction or account, ideally covering fraud signals both within a financial institution’s own environment and across the industry. This allows businesses to discover signals pointing to fraudulent activity not previously seen within their systems, or to flag data points that would otherwise be considered safe, and in turn to develop new fraud prevention models and more comprehensive strategies.
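
As a purely illustrative sketch of this kind of journey-wide signal aggregation, the example below combines a few independent risk signals into a single score. The event fields, thresholds, and weights are all hypothetical assumptions, not a description of any specific vendor’s analytics.

```python
# Illustrative only: combine weak fraud signals from across the customer journey
# into one risk score. Field names, thresholds, and weights are assumptions.
from dataclasses import dataclass

@dataclass
class JourneyEvent:
    stage: str                # "registration", "login", "transaction", ...
    device_reputation: float  # 0.0 (trusted) .. 1.0 (suspicious), assumed upstream signal
    geo_velocity_kmh: float   # implied travel speed between consecutive logins
    amount: float             # transaction amount, 0 for non-payment events

def risk_score(event: JourneyEvent, recent_registrations_per_hour: int) -> float:
    """Combine independent signals into a single 0..1 risk score."""
    score = 0.0
    # Spike in registrations from the same device/IP cluster.
    if event.stage == "registration" and recent_registrations_per_hour > 20:
        score += 0.4
    # Physically implausible login locations (e.g. two countries within an hour).
    if event.geo_velocity_kmh > 900:
        score += 0.3
    # Suspicious device or browser fingerprint.
    score += 0.2 * event.device_reputation
    # Unusually large transaction relative to an assumed baseline.
    if event.stage == "transaction" and event.amount > 10_000:
        score += 0.3
    return min(score, 1.0)

# Events scoring above a chosen threshold would typically be routed to
# step-up authentication or manual review rather than blocked outright.
```

In practice such scoring would be driven by models trained on data spanning the institution and the wider industry rather than hand-set thresholds, but the structure, combining weak signals from registration, login, device, and payment activity, is the same idea.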

Fraud data sharing: Sharing fraud data across multiple organisations can help identify new fraud trends before they take hold within an institution’s own environment and stop risky transactions early.

Educate consumers: While institutions can deploy multiple tools to monitor GEN AI related fraud, regular consumers don’t have the same advantage and are particularly susceptible to impersonation attempts, among other deepfake or GEN AI related cons. Since they can’t be equipped with the same tools to recognise synthetic content, educating consumers on how to react when asked to share valuable personal or financial information is an important step in helping them avoid falling victim to these scams.

