How to Safeguard Businesses Against AI-driven Fraud

Explore proactive strategies to safeguard your business against emerging AI-fueled fraud threats in 2024.

March 11, 2024


David Divitt, senior director of fraud prevention and experience at Veriff, outlines actionable strategies for businesses and leaders to fortify their defenses against the changing nature of AI-driven fraud.

We know identity fraud is on the rise. In fact, according to Veriff’s Identity Fraud Report 2024, identity fraud went up by 20% in the past year alone, and this will only continue. Identity theft is a core tactic of fraud actors and often involves using fake IDs to impersonate people online after phishing for their identifying information. Victims of these scams can face devastating outcomes, including a complete upending of their personal lives, reputational harm, or even financial ruin.

Artificial intelligence (AI) has made identity fraud even easier. Fraud attempts are getting more refined, and AI-fueled tools are making fraud accessible even to less sophisticated bad actors. For example, AI-generated deepfakes make it easy for almost anyone to create impersonations or synthetic identities, whether of celebrities or of someone they know.

Soon, we’ll see the number of account takeovers using deepfakes rise as the adoption of biometrics for authentication increases. Let’s look at what we can expect from AI’s influence on the fraud industry in the new year and how we need to change our approach to fend off these bad actors.

The State of AI Fraud

Fraud is not a new concept, but the near-instant pervasiveness of AI has given rise to new, easy-to-access methods for conducting successful fraud attempts. A recent survey found that impersonation fraud became the most common type of identity fraud in 2023, as generative AI and deepfakes-as-a-service platforms became readily accessible, enabling bad actors to manipulate and steal identities. It’s not surprising, then, that impersonation accounted for 85% of all fraud.

The most common method for creating deepfakes relies on deep learning, a subset of AI and machine learning (ML). Layered neural networks extract ever more precise details from the data they’re fed, whether an image, an audio or video clip, or even a text sample, to create a near-perfect replica of a person or document.
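
To make that concrete, here is a minimal sketch of the adversarial generator/discriminator pairing (a generative adversarial network, or GAN) that underlies many deepfake tools. It is a toy PyTorch example, not any real deepfake system; the layer sizes, data, and training schedule are assumptions chosen for brevity.

```python
# Toy sketch of the adversarial setup behind many deepfake generators.
# Illustrative only: layer sizes, data, and training schedule are
# arbitrary assumptions, not any real deepfake pipeline.
import torch
import torch.nn as nn

LATENT, FEATURES = 64, 128  # assumed dimensions for the toy example

# Generator: maps random noise to a synthetic sample (e.g., image features).
generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, FEATURES), nn.Tanh(),
)

# Discriminator: scores how "real" a sample looks.
discriminator = nn.Sequential(
    nn.Linear(FEATURES, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_batch = torch.randn(32, FEATURES)  # stand-in for real training data

for step in range(100):
    # 1) Train the discriminator to separate real from generated samples.
    fake = generator(torch.randn(32, LATENT)).detach()
    d_loss = loss_fn(discriminator(real_batch), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator.
    fake = generator(torch.randn(32, LATENT))
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The two networks improve in lockstep: as the discriminator gets better at spotting fakes, the generator is pushed to produce ever more convincing ones, which is exactly why deepfake quality keeps climbing.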

Many people have already encountered deepfakes in their daily lives without even being aware of it. The average person will regularly come across video deepfakes: doctored clips of a victim, typically a celebrity, appearing to do or say something they didn’t. These deepfakes are often created by superimposing the celebrity’s features over someone else’s or by “puppeteering” their image.

Image-based deepfakes are the most abundant, with AI learning and recreating the intricacies of a human face, but the same techniques can also take a voice and produce an accurate duplicate, whether through text-to-speech or by overlaying the cloned voice on a bad actor’s speech. This vocal fraud can then be used to masquerade as a victim over the phone, making the recipient believe they’re speaking to someone familiar rather than a scammer, potentially leading to the disclosure of personally identifiable information, bank account passwords, and more.

As AI tools become easier and more cost-effective to access and implement, we will see more impersonation and identity fraud attacks. We’ll also see these attacks pushed out to the masses, as groups of bad actors use libraries of stored deepfakes and acquired identities to deploy them rapidly at scale. Though the threat and sophistication of deepfakes may seem daunting, cybersecurity experts can now use the same AI tools to combat bad actors.


Outlook and Strategies for 2024

Digitization has sped up bad actors’ ability to commit fraud. Individuals can now validate their identities online in seconds, but bad actors are using those same digitization tools to commit fraud just as quickly.

In one recent case, a finance worker paid fraudsters $25 million after they used deepfake video to pose as the company’s CFO. This tech can even be applied to documents, with bad actors issuing fake IDs in minutes. The sophistication of AI-driven impersonation has improved significantly, even over the span of a few months, leading to increasingly sophisticated attacks across identity methods – from biometrics to verifiable documents.

Combining this technology with mass attacks – where bad actors use bots to hit targets at scale with credentials purchased on the dark web after data breaches – will initiate a new wave of automated, AI-fueled attacks.
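
As an illustration of how such bot-driven, at-scale attacks can show up in login telemetry, the sketch below flags sources that attempt many distinct usernames within a short window, a classic credential-stuffing signature. The thresholds and event format are assumptions for the example, not production guidance.

```python
# Minimal credential-stuffing heuristic: flag IPs that try many
# distinct usernames within a short window. Thresholds are illustrative
# assumptions, not production guidance.
from collections import defaultdict

WINDOW_SECONDS = 300      # assumed 5-minute window
MAX_DISTINCT_USERS = 10   # assumed per-IP threshold

def suspicious_ips(login_attempts):
    """login_attempts: iterable of (timestamp, ip, username) tuples."""
    by_ip = defaultdict(list)
    for ts, ip, user in login_attempts:
        by_ip[ip].append((ts, user))

    flagged = set()
    for ip, events in by_ip.items():
        events.sort()
        for i, (ts, _) in enumerate(events):
            # Count distinct usernames this IP tried within the window.
            users = {u for t, u in events[i:] if t - ts <= WINDOW_SECONDS}
            if len(users) > MAX_DISTINCT_USERS:
                flagged.add(ip)
                break
    return flagged
```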

As quickly as AI opens up new avenues for fraud, it can also be a powerful asset in the battle of AI vs. AI. Fraud teams must assess threats regularly to stay ahead of attacks and be agile enough to pivot in real time. Relying on legacy approaches to fraud prevention is no longer enough; these approaches can leave an organization exposed. For instance, passwords are vulnerable to data breaches and malware, and two-factor authentication is susceptible to device compromise and social engineering.

Teams need to take a layered approach to fraud prevention to better protect their data and customers. That means combining multiple technologies: examining device signatures, using advanced liveness testing, strengthening authentication, and incorporating preventative AI.
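
To picture what that layering can look like, here is a hypothetical sketch in which each signal (device, liveness, document, behavior) contributes to one combined risk decision rather than any single check deciding alone. The signal names, weights, and thresholds are all illustrative assumptions, not any vendor’s actual model.

```python
# Hypothetical layered risk score combining independent fraud signals.
# Signal names, weights, and thresholds are illustrative assumptions,
# not any vendor's actual model.
from dataclasses import dataclass

@dataclass
class Signals:
    device_reputation: float   # 0.0 (trusted) .. 1.0 (risky)
    liveness_score: float      # 0.0 (live person) .. 1.0 (likely spoof)
    document_anomaly: float    # 0.0 (clean ID) .. 1.0 (likely forged)
    behavior_anomaly: float    # 0.0 (normal) .. 1.0 (bot-like)

WEIGHTS = {
    "device_reputation": 0.2,
    "liveness_score": 0.35,
    "document_anomaly": 0.3,
    "behavior_anomaly": 0.15,
}

def risk_score(s: Signals) -> float:
    """Weighted blend of layered signals into one score in [0, 1]."""
    return sum(getattr(s, name) * w for name, w in WEIGHTS.items())

def decide(s: Signals) -> str:
    score = risk_score(s)
    if score >= 0.7:
        return "reject"
    if score >= 0.4:
        return "step_up"   # ask for extra verification (added friction)
    return "approve"
```

The point is the structure, not the specific weights: compromising one layer, say a convincingly forged document, is not enough to pass when the other signals disagree.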

AI can be used against AI threats in several identity protection use cases, for both the user and the business. Whether the task is determining the legitimacy of an ID document or confirming that a person on the phone is who they claim to be, AI-based verification tools can do it with far more accuracy than the human eye.

For example, AI-powered liveness detection technology can combine facial recognition with sophisticated algorithms that determine if a person is physically present and alive during capture. This technology is highly efficient, performing what would amount to hundreds of forensic checks in just seconds.
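
As a deliberately simplified illustration of the idea, the sketch below uses frame-to-frame motion as a crude liveness cue: a printed photo held to a camera produces nearly identical frames, while a live face shows small natural movement. Real liveness systems perform far richer forensic checks; the threshold and frame format here are assumptions.

```python
# Extremely simplified "liveness" heuristic based on frame-to-frame
# motion. Real systems use far richer checks; the threshold and frame
# format are illustrative assumptions.
import numpy as np

MOTION_THRESHOLD = 2.0  # assumed mean absolute pixel delta

def naive_liveness_check(frames: list[np.ndarray]) -> bool:
    """frames: grayscale images (H x W arrays) from a short capture."""
    deltas = [
        np.abs(a.astype(float) - b.astype(float)).mean()
        for a, b in zip(frames, frames[1:])
    ]
    # Some frame-to-frame change should exist, but not chaotic jumps
    # (which can indicate a replayed or spliced video).
    return MOTION_THRESHOLD < float(np.mean(deltas)) < 50 * MOTION_THRESHOLD

# Example: identical frames (a static photo) fail; jittered frames pass.
photo = np.ones((64, 64)) * 128
live = [photo + np.random.normal(0, 3, (64, 64)) for _ in range(5)]
print(naive_liveness_check([photo, photo, photo]))  # False
print(naive_liveness_check(live))                   # True (typically)
```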

An effective fraud defense must be strong, dynamic, and multi-faceted. One essential component is a robust identity verification solution. Teams need tools that can adapt to support their defenses as fraud trends evolve.

AI-driven Crime in the Years Ahead

Fraud, aided by AI, will impact almost every industry, expanding across e-commerce, payments, and video gaming platforms, as bad actors use AI to make realistic attempts at gaining personal information. 

Investing in partnerships with proven leaders across the fraud prevention space is important. Doing the work now to implement the newest technology, and examining where the right amount of friction can be added to your user journey at the right time, will be crucial to stopping bad actors without disrupting legitimate day-to-day business and customer flows.
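
One simple way to reason about “the right friction at the right time” is to map journey events to verification requirements and escalate only when the session already looks risky. The event names and required checks below are hypothetical, meant only to illustrate the pattern.

```python
# Hypothetical mapping of user-journey events to verification friction.
# Event names and required checks are illustrative assumptions, not a
# prescribed policy.
REQUIRED_CHECKS = {
    "browse_catalog": [],                          # no friction
    "login_new_device": ["otp"],                   # light step-up
    "change_payout_account": ["otp", "liveness"],  # high-risk action
    "large_withdrawal": ["otp", "liveness", "document_check"],
}

def checks_for(event: str, risk_score: float) -> list[str]:
    """Escalate baseline checks when the session already looks risky."""
    checks = list(REQUIRED_CHECKS.get(event, ["otp"]))
    if risk_score >= 0.4 and "liveness" not in checks:
        checks.append("liveness")
    return checks

print(checks_for("login_new_device", risk_score=0.1))  # ['otp']
print(checks_for("login_new_device", risk_score=0.6))  # ['otp', 'liveness']
```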

Although the threat of AI is very real, bad actors are not the only ones benefiting from the technology. AI can and should be used to defend against those same threats. Deepfakes are only going to get more convincing – leaders need to pay close attention to the changing threat landscape and be ready to adjust strategies quickly.

How are you safeguarding your business and protecting your customers against AI tricksters? Let us know on Facebook, X, and LinkedIn. We’d love to hear from you!



David Divitt

Senior Director of Fraud Prevention & Experience, Veriff

David Divitt is Senior Director of Fraud Prevention & Experience at Veriff, a global identity verification provider. With more than two decades of experience working with major financial institutions, payment providers, and software vendors to help develop their fraud prevention strategies, Divitt supports the production and development of Veriff’s identity verification solutions to meet the need for modern and innovative fraud and financial crime prevention technology, while keeping the user experience seamless. Divitt was most recently Vice President of Financial Crime Products at Vocalink, a Mastercard company. Prior to Mastercard, Divitt was Product Manager of Financial Products at Alaric, an NCR company that offers global fraud prevention and intelligent transaction handling solutions. He has provided professional consultation to over 50 of the top global banks and helped design and structure fraud solutions and operations in multiple tier-one financial institutions.