Monthly Archives: December 2023


Lenders prioritise automation above all, according to research. In a study conducted by Forrester Consulting on behalf of Experian, we surveyed 660 and interviewed 60 decision-makers for technology purchases that support the credit lifecycle at their financial services organisations. The study included businesses across North America, the UK and Ireland, and Brazil. Regardless of industry or region, decision-makers consistently identified automation as both the top priority for their business and the biggest challenge. Lenders are using automation across the credit lifecycle and intend to invest further in the next 12 months, but multiple barriers stand in the way of enhancing it. Here we look at the use cases for automation and address the key challenges lenders face when automating decisions.

The automation agenda

The interpretation and application of automation vary hugely across the maturity spectrum of businesses in our research. While some companies see automation as a means of simplifying tasks, such as the transition from manual processes to electronic spreadsheets, others are embracing its more advanced forms, such as AI-powered models.

Use cases for automation in lending

- Customer service chatbots using Natural Language Processing (NLP) combined with Robotic Process Automation (RPA).
- Remote verification of customers using machine vision and RPA to cross-check data.
- Data governance: cleansing personal information from within data using RPA and NLP.
- Operational efficiencies using process mining and AI to identify automation opportunities.
- Credit and fraud risk decisioning using machine learning.
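As a rough illustration of the last use case, a machine-learning credit risk model ultimately reduces to scoring applicant features. The sketch below uses a logistic function with invented weights and feature names; real models are trained on historical outcome data rather than hand-set like this.

```python
import math

# Illustrative only: the weights, bias, and feature names are invented
# for this sketch, not taken from any real scoring model.
WEIGHTS = {"debt_to_income": 2.5, "years_at_address": -0.3, "prior_defaults": 1.8}
BIAS = -3.0

def default_probability(applicant: dict) -> float:
    """Logistic model: estimated probability of default from applicant features."""
    z = BIAS + sum(w * applicant[name] for name, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

low_risk = {"debt_to_income": 0.2, "years_at_address": 8, "prior_defaults": 0}
high_risk = {"debt_to_income": 0.9, "years_at_address": 1, "prior_defaults": 2}
print(round(default_probability(low_risk), 3))
print(round(default_probability(high_risk), 3))
```

A real deployment would add trained coefficients, calibration, and the governance controls discussed later in this article.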
"Automation is about making processes as slick and robust as possible, giving the consumer a rapid journey so they can get processed very quickly, while behind the scenes lenders are making the best possible, compliant decisions that protect them from losses around both credit risk and fraud." – Neil Stephenson, Vice President, Experian Software Solutions Consultancy

The changing face of automated decision-making, in line with rapid tech advancements, makes the use of automation by lenders a more complex opportunity than most. On one side there is the chance to enhance models with AI-powered tools, taking manual and subjective decision-making out of processes. On the other, there's the issue of governance and compliance: how to explain models that remove humans altogether.

Introducing automation into some parts of the credit lifecycle isn't always straightforward. Customer management has benefited from substantial investment in automation over the years, particularly Natural Language Processing (NLP), but according to our research, the priority for business investment in Robotic Process Automation (RPA) over the next 12 months is originations. With onboarding playing such a key role in both customer experience and portfolio growth, businesses are looking to enhance this part of the credit lifecycle with automation.

Customer experience is driving growth

Automation plays a pivotal role in improving the customer journey and experience. The research showed that enhancing customer experience ranked even higher than growth as a priority for many organisations. As businesses strive to deliver seamless and personalised interactions, automation provides the necessary foundation for digital success, which in turn can strengthen competitiveness while retaining valuable customers.

"Strategically investing in automation offers businesses the opportunity to scale operations, with a primary focus on growth.
In times of economic uncertainty, more targeted, customer-centric strategies that encompass more accurate predictive models, built on up-to-date samples and executed rapidly, can help mitigate a higher-risk lending backdrop," says Neil Stephenson, Vice President of Experian Software Solutions Consultancy. "Customer experience is the battleground for businesses, where they compete to deliver the best digital journeys in the market. It's a battleground that isn't just about increasing revenue – the market perception of an organisation can be as important as growth in some portfolios because businesses have a reputation to protect."

Automating decisions can ensure customer experience is truly seamless, but businesses face multiple barriers when it comes to credit and risk decision automation.

Reducing referred applications

From scoring regression models to the development of machine learning models, better and smarter analytics are critical to drive the processes responsible for making application decisions. Reducing referred applications in turn decreases the need for manual intervention. By minimising the volume of applications in the middle of the credit score range, lenders have a clearer and ultimately more automated approach to application accepts and declines.

We interviewed decision-makers to understand the challenges lenders face when automating decisions:

- Increasing data sources to allow for a more complete picture of the consumer
- Improving data quality and increasing the volume of data
- Preventing model bias
- The complexity of consumer types attached to some products
- Redundancy in data input and analytics
- Training across key roles for a better understanding of automation capabilities
- Explaining decisions based on machine learning models to regulators
- Complex fraud referral processes

For many respondents, automation is about accuracy and efficiency. By improving automation, there are fewer instances of errors and delays.
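The effect of shrinking the mid-score referral band can be sketched as a simple dual-cutoff decision rule. The cutoff values and scores below are illustrative, not taken from the research:

```python
# Hypothetical dual-cutoff rule: only mid-band scores go to manual review.
# Narrowing the gap between the two cutoffs reduces the referral rate.
ACCEPT_CUTOFF = 700   # scores at or above this are auto-accepted
DECLINE_CUTOFF = 550  # scores below this are auto-declined

def decide(score: int) -> str:
    """Return the automated decision for a credit application score."""
    if score >= ACCEPT_CUTOFF:
        return "accept"
    if score < DECLINE_CUTOFF:
        return "decline"
    return "refer"  # mid-band: manual review required

applications = [720, 610, 530, 680, 590]
decisions = [decide(s) for s in applications]
print(decisions, decisions.count("refer") / len(decisions))
```

Better analytics sharpen the score itself, so fewer applications fall into the "refer" band in the first place.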
To ensure scalability can exist in consistent, compliant, and accurate processes that work for both the business and the consumer, here are 10 tips to help tackle the challenges faced by lenders when it comes to automating decisions:

1. Embrace advanced data aggregation tools and technologies that can efficiently collect and integrate data from various sources. Partner with known, trustworthy data providers to enrich datasets.
2. Explore the use of no-code data management tools that allow users to add and remove data sources more quickly and easily.
3. Implement data quality processes. Regularly audit and clean data to remove inconsistencies.
4. Move to cloud-based solutions for scalable data storage and processing of very large datasets.
5. Regularly audit and monitor machine learning models for bias. Eliminating sampling bias is not yet possible, but using a range of datasets and various sampling techniques will ensure representation across different demographics and help minimise bias.
6. Develop specific models for different consumer segments or product categories. Regularly update models based on evolving consumer trends and behaviours.
7. Conduct a thorough analysis of data inputs and streamline redundant variables. Use feature selection techniques such as correlations, weight of evidence, and information value to identify the most relevant information.
8. Foster a culture of continuous learning and collaboration for all key stakeholders involved in the credit decisioning and strategy process.
9. Develop transparent models with explainable features. Use interpretable machine learning algorithms that allow for clear explanations of decision factors at the customer level.
10. Streamline identity verification processes by using smart orchestration to reduce false positives and prevent fraud.

More on automated decision-making from PowerCurve – North America
More on automated decision-making from PowerCurve – UK
Related content: Digital decisioning
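The weight-of-evidence (WoE) and information-value (IV) techniques mentioned in the tips can be sketched as follows. The bin counts are invented for demonstration; in practice they come from binning a candidate feature against observed good/bad outcomes:

```python
import math

def woe_iv(goods_by_bin, bads_by_bin):
    """Compute per-bin weight of evidence and total information value
    from counts of good and bad outcomes in each bin of a feature."""
    total_good = sum(goods_by_bin)
    total_bad = sum(bads_by_bin)
    woes, iv = [], 0.0
    for g, b in zip(goods_by_bin, bads_by_bin):
        pct_good = g / total_good          # share of all goods in this bin
        pct_bad = b / total_bad            # share of all bads in this bin
        woe = math.log(pct_good / pct_bad)
        iv += (pct_good - pct_bad) * woe   # each bin's IV contribution
        woes.append(woe)
    return woes, iv

# Example: a feature bucketed into three bins (counts are made up)
woes, iv = woe_iv(goods_by_bin=[400, 300, 300], bads_by_bin=[20, 30, 50])
print([round(w, 2) for w in woes], round(iv, 3))
```

Features with a higher IV separate goods from bads more strongly and are better candidates to keep; near-zero IV suggests a redundant variable.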

Published: December 18, 2023 by Managing Editor, Experian Software Solutions

Authorised Push Payment fraud is growing, and as regulators around the world begin to take action to tackle it, we look at what financial institutions need to focus on now.

APP fraud and social engineering scams

In recent years, there has been a significant surge in reported instances of Authorised Push Payment (APP) fraud. These crimes, also known as financial scams, wire fraud scams, or social engineering scams in different parts of the world, are a type of fraud in which criminals trick victims into authorising a payment to an account controlled by the fraudster for what the victim believes to be genuine goods or services in return for their money. Because the transactions made by the victim are usually made over a real-time payment scheme, they are often irrevocable. Once the fraudster receives the funds, they are quickly transferred through a series of mule accounts and withdrawn, often abroad.

Because APP fraud often involves social engineering, it employs some of the oldest tricks in the criminal's book, such as pressuring victims into quick decisions or enticing them with too-good-to-be-true schemes and tempting opportunities to make a fortune. Unfortunately, these tricks are also some of the most successful, and criminals have used them to their advantage more than ever in recent times. On top of that, the widespread adoption of real-time payments lets victims transfer funds quickly and easily, making it much easier for criminals to exploit the process.

APP fraud and social engineering scams - cases and losses across the globe: View map

Impact of AI on APP fraud

Recent advancements in generative artificial intelligence (Gen AI) have accelerated the process used by fraudsters in APP fraud.
Criminals use tools like ChatGPT and Bard to create more persuasive messages, or the bot functionality offered by Large Language Models (LLMs) to lure victims into romance scams and the more sophisticated pig butchering scams. Other examples include the use of face-swapping apps, or audio and video deepfakes that help fraudsters impersonate someone known to their victims or create a fictitious personality that victims believe to be a real person. Deepfake videos of celebrities have also been commonly used to trick victims into making an authorised transaction and losing substantial amounts of money. While some of these hoaxes were very difficult to pull off a few years ago, the widespread availability of easy-to-use Gen AI tools has resulted in an increased number of attacks.

Many of these scams can be traced back to social media, where the initial communication between victim and criminal takes place. According to UK Finance, 78% of APP fraud started online during the second half of 2022, and the figure was similar for the first half of 2023 at 77%. Fraudsters also use social media to research their victims, which makes these attacks highly personalised. Accessible information often includes facts about family members, things of personal significance such as hobbies or spending habits, favourite holiday destinations, political views, or random details like favourite foods and drinks. On top of that, criminals use social media to gather photos and videos of potential targets or their family members that can later be leveraged to generate convincing deepfake audio, video, or images. Combined, these contribute to a highly personalised approach to scams that has never been seen before.

What regulators are saying around the globe

APP fraud mitigation is a complex task that requires collaboration between multiple entities.
The UK is by far the most advanced jurisdiction in terms of measures taken to tackle these types of fraud and protect consumers. Some of the most important changes that the UK's Payment Systems Regulator (PSR) has proposed or introduced so far include:

- Mandatory reimbursement of APP scam victims: a world-first mandatory reimbursement model will be introduced in 2024, replacing the voluntary reimbursement code that has been operational since 2019.
- 50/50 liability split: all payment firms will be incentivised to take action, with sending and receiving firms splitting the costs of reimbursement 50:50.
- Publication of APP scams performance data: the inaugural report was released in October, showing for the first time how well banks and other payment firms performed in tackling APP scams and how they treated those who fell victim.
- Enhanced information sharing: improved intelligence-sharing between PSPs, so they can improve scam prevention in real time, is expected to be implemented in early 2024.

Because many of the scams start on social media or in fake advertisements, banks in the UK have called for large tech firms (for example, Google and Facebook) and telcos to be included in the scam reimbursement process. As a first step towards offering more protection for customers, in December 2022 the UK Parliament introduced a new Online Safety Bill that intends to make social media companies more responsible for their users' safety by removing illegal content from their platforms. In November 2023, a world-first agreement to tackle online fraud was reached between the UK government and some of the leading tech companies: Amazon, eBay, Facebook, Google, Instagram, LinkedIn, Match Group, Microsoft, Snapchat, TikTok, X (Twitter) and YouTube.
The intended outcome is for people across the UK to be protected from online scams, fake adverts and romance fraud thanks to increased security measures, including better verification procedures and the removal of fraudulent content from these platforms.

Outside of the UK, approaches to protecting customers from APP fraud and social engineering scams exist in a few other jurisdictions:

- In the Netherlands, banks reimburse victims of bank impersonation scams when these are reported to the police and the victim has not been 'grossly negligent'.
- In the US, some banks provide voluntary reimbursement in cases of bank impersonation scams. As of June 2023, payment app Zelle, owned by seven US banks, has started refunding victims of impersonation scams, addressing earlier calls for action related to reported scams on the platform.
- In the EU, under the newly proposed Payment Services Directive (PSD3), issuers will also be liable when a fraudster impersonates a bank's employee to make the user authenticate the payment (subject to filing a police report and the payer not acting with gross negligence).
- In October 2023, the Monetary Authority of Singapore (MAS) proposed a new Shared Responsibility Framework that assigns financial institutions and telcos relevant duties to mitigate phishing scams, and calls for payouts to affected scam victims where these duties are breached. While this proposal only covers unauthorised payments, it is unique as the first official proposal to include telcos in the reimbursement process.
- Earlier this year, the National Anti-Scam Centre in Australia announced the start of an investment scam fusion cell, bringing together representatives from banks, telcos, and digital platforms in a coordinated effort to identify methods for disrupting investment scams and minimising losses.
To add to that, in November 2023 Australian banks announced the introduction of a confirmation-of-payee system that is expected to help reduce scams by ensuring customers can confirm they are transferring money to the person they intend to, similar to what the UK introduced a few years ago. Finally, over the past few months, more jurisdictions, such as Australia, Brazil, the EU and Hong Kong, have announced either proposals for or the roll-out of fraud data sharing schemes between banks and financial institutions. While not all of these schemes are directly tied to social engineering scams, they can be seen as a first step towards tackling scams together with other types of fraud.

While many jurisdictions beyond the UK are still in the early stages of the legislative process to protect consumers from scams, there is an expectation that regulatory changes that prove successful in the UK could be adopted elsewhere. This should help introduce better tracking of the problem, stimulate collaboration between financial institutions, and add visibility of financial institutions' efforts to prevent these types of fraud. As more countries introduce new regulations and more financial institutions start monitoring their systems for scam occurrences, the industry should be able to achieve greater success in protecting consumers and mitigating APP fraud and social engineering scams.

How financial institutions can prevent APP fraud

Changing regulations have initiated the first liability shifts towards financial institutions when it comes to APP fraud, making fraud prevention measures a greater area of concern for many leaders in the industry. Now that responsibility is spreading across both the sending and receiving payment provider, institutions also need to improve monitoring of incoming payments.
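The confirmation-of-payee checks mentioned above can be sketched as a simple name-similarity comparison between the name the payer enters and the name registered on the destination account. The thresholds, names, and three-way outcome here are illustrative; production systems use far more robust matching:

```python
import difflib

# Hypothetical confirmation-of-payee sketch: thresholds are invented.
def check_payee(entered: str, registered: str) -> str:
    """Classify a payee name check as match, close match, or no match."""
    ratio = difflib.SequenceMatcher(
        None, entered.strip().lower(), registered.strip().lower()
    ).ratio()
    if ratio == 1.0:
        return "match"
    if ratio >= 0.8:
        return "close match"   # prompt the payer to double-check
    return "no match"          # warn the payer before the payment proceeds

print(check_payee("Jane Smith", "Jane Smith"))
print(check_payee("Jane Smyth", "Jane Smith"))
print(check_payee("ACME Ltd", "Jane Smith"))
```

The point of the "close match" band is to interrupt the social-engineering script: a warning at the moment of payment gives the victim a chance to reconsider.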
What's more, as these types of fraud are a global phenomenon, financial institutions in multiple jurisdictions might consider taking greater fraud prevention steps early on, before regulators impose any mandatory rules, to keep their customers safe and their reputation high. Here are five ways businesses can keep customers safe while protecting brand reputation:

- Advanced analytics: advanced data analytics capabilities create a 360° view of individuals and their behaviour across all connected current accounts. This supports more sophisticated and effective fraud risk analysis that goes beyond a single transaction. Combining it with a view of fraudulent behaviour beyond the payment institution's premises, by adding the ability to ingest data from multiple sources and develop models at scale, allows businesses to monitor new fraud patterns and evolving threats.
- Behavioural biometrics: provide insights on indicators such as active mobile phone calls, session length, segmented typing, hesitation, and displacement to detect whether the sender is receiving instructions over the phone or showing unusual behaviour at the time of the transaction.
- Transaction monitoring and anomaly detection: required to monitor sudden spikes in transaction activity that are unusual for the sender of the funds, as well as mule account activity on the receiving bank's end.
- Fraud data sharing capabilities: sharing fraud data across multiple organisations can help identify and stop risky transactions early, in addition to mitigating mule activity and fraudulent new account openings.
- Monitoring of newly opened accounts: used to detect fake accounts or newly opened mule accounts.

By leveraging a combination of these capabilities, financial institutions will be better prepared to cope with new regulations and to protect their customers from APP fraud.

Identity & Fraud Report 2023 US
Identity & Fraud Report 2023 UK
Defeating Fraud Report 2023 EMEA & APAC
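As a minimal illustration of the transaction monitoring and anomaly detection capability above, a statistical outlier check on a sender's recent payment history might look like the sketch below. The history, threshold, and z-score approach are invented for demonstration; real systems combine many more signals:

```python
import statistics

def is_anomalous(history, amount, z_threshold=3.0):
    """Flag a payment more than z_threshold standard deviations
    above the mean of the sender's recent payment amounts."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return amount != mean
    return (amount - mean) / stdev > z_threshold

# Made-up recent outgoing payments for one customer
recent_payments = [42.0, 55.0, 38.0, 60.0, 47.0, 51.0]
print(is_anomalous(recent_payments, 49.0))    # typical amount
print(is_anomalous(recent_payments, 950.0))   # sudden spike
```

An APP scam often shows up exactly like this: a single payment far outside the victim's normal pattern, which is why sender-side baselines matter alongside mule-account monitoring on the receiving end.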

Published: December 5, 2023 by Mihail Blagoev, Solution Strategy Analyst, Global Identity & Fraud
