Deepfakes: Can Biometric Authentication Defeat the New Cybersecurity Nightmare?

Though deepfake technology is still in its nascent stages, it could pose immense cybersecurity challenges to organizations as it matures. Let’s hear from experts about how biometrics can help organizations tackle the threat.

Last Updated: September 2, 2022

Remember how, in April, fraudsters used AI-generated pictures to pose as a law firm and trick consumers and businesses? A similar incident happened last week, when attackers created a deepfake hologram of a senior representative of the cryptocurrency trading business Binance. Patrick Hillmann, the chief communication officer at Binance, disclosed that a “sophisticated” group of hackers utilized video footage of his prior interviews and media appearances and digitally altered it to create an AI hologram of him. Read more about the incident in Hillmann’s blog.

Such deepfake frauds are set to become an immense threat to organizations and individuals worldwide. Scammers are even starting to use deepfake video technology to impersonate others in live online job interviews. But are businesses equipped to tackle the growing threat?

In this article, we bring to you in-depth analyses from security experts on the threat posed by deepfakes, the best approaches to spot the gaps, and how biometric technology could emerge as the knight in shining armor to deal with such identity attacks. 

See More: What Is Deepfake? Meaning, Types of Frauds, Examples, and Prevention Best Practices for 2022

Deepfakes – Meaning, Prevention Best Practices, and Role of Biometrics as a Security Measure

How do deepfakes work?

In an increasingly remote and hybrid working world, deepfakes offer a sophisticated means for criminals to create fake identities and apply for remote job roles, often bolstered with other stolen Personally Identifiable Information (PII), comments Ajay Amlani, SVP, head of Americas, iProov. 

“It’s not just a paycheck that criminals are chasing when applying for jobs fraudulently. If a criminal can succeed in getting hired to a role that has access to personal or other data, they can steal that data or hold it for ransom for financial gain.”

– Ajay Amlani, SVP, head of Americas, iProov

ZeroFox’s VP of intelligence strategy and advisory, Brian Kime, explains why cybercriminals use deepfakes to apply for jobs. “Using deepfake technology to get a job with a target is likely a sign that the targeted company has a really good cybersecurity program, and other attempts to breach them have failed.” The criminals want the company’s data so badly that they turn to deepfake technology to try to get themselves hired by the target, he adds. But the criminals likely aren’t native English speakers, so they “turn to technology” to make themselves “sound and look” like a local.

Mike Loukides, VP of content strategy for O’Reilly Media, says, “I was puzzled by this at first. How does a deepfake help you apply for a job? What’s the advantage? You still have to go through an interview, and even if you have a good enough fake to generate head and mouth motions in real-time, you still have to answer the questions.”

“That’s really the key. The person applying for the job with the deepfake still has to answer the questions that you’d normally have in an interview. So they still have to demonstrate technical competence. This is not about landing jobs for which you’re unqualified.” 

What it’s about, then, is landing jobs with a fake ID: a job that presumably can’t be traced back to you, Loukides says. He describes cybercriminals’ modus operandi and how they misuse the technology to get hired and pursue ulterior motives. “Now it’s clear what’s happening. Once you’re hired, you’re working remotely for an organization and have access to resources including code, data, finances, etc. You’re in a great position to steal that data: information about customers, intellectual property, plans and strategies, and a lot more.”

“If you’re at the right company, there will be plenty of criminal organizations willing to pay for that data. And if you’ve created your fake ID carefully, it will be very difficult for the employer to trace any information theft back to you.”

– Mike Loukides, VP of content strategy for O’Reilly Media

Are video interviews a poor way to spot deepfakes?

It’s worrisome that consumers think they could spot a deepfake easily, says Amlani. “About 57% felt confident they could tell the difference, according to our recent survey. But the human eye is easily spoofed – especially as deepfakes become more convincing – so we are both poorly equipped to detect deepfakes and too self-assured to be expecting them.”

In 2020, the Idiap Research Institute tested subjects with a series of deepfakes ranging from simple to highly sophisticated. The experiment revealed that just “24% of subjects detected deepfake videos, while only 71% of subjects correctly identified the deepfakes in the easy category.” And a caveat – this was when the subjects were expecting to see deepfakes, he adds.

“Your chances would be far lower if you were caught off guard during a potentially tense recruitment procedure,” says Amlani.

Roger Grimes, a data-driven defense evangelist at cybersecurity firm KnowBe4, believes that video interviews are a poor method of detecting deepfakes. “This is because many of the readily recognized characteristics of today’s deepfakes, such as jerky video and spotty speech, could be misinterpreted as a video error or bandwidth issue.” The deepfake candidate can lie and say they are having bandwidth problems, which could explain the artifact errors and even slow responses or non-responses to specific questions.

Loukides thinks people are predisposed to trust what they see and might not be as alert as they should be when conducting an interview. “But from what I have read and seen regarding deepfake interviews, the ‘tells’ are usually rather easy to see, such as out-of-sync lips and voice, or a voice that doesn’t sound right because it’s been altered. Fortunately, this has been the problem with deepfakes all along. They look impressive, but if you’re critical, it’s usually easy to find something that’s wrong.”

Having said that, deepfake technology will obviously get better, as will technology for detecting fakes, he adds. 

“I would strongly recommend going beyond the video itself. Someone seriously involved with digital espionage once told me that a person who doesn’t have a digital shadow probably doesn’t exist.” 

“Competent cybercriminals are going to create digital shadows for their deepfake identities. But you need to make it hard for them: check references, check previous employers, check projects on GitHub, check social media, etc.” 

– Mike Loukides, VP of content strategy for O’Reilly Media

“If any of these come up empty, or if they don’t fit with each other, you may be dealing with a person who doesn’t exist.” 
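Loukides’s “digital shadow” checks can be framed as a simple triage exercise: each independent source (references, prior employers, public code, social media) either corroborates the claimed identity or comes up empty, and an identity with too few corroborating signals is flagged for deeper review. The sketch below is a hypothetical illustration of that triage logic only; the signal names and the threshold are assumptions, not an established screening standard.

```python
def triage_digital_shadow(signals: dict[str, bool], min_corroborated: int = 3) -> str:
    """Flag candidate identities whose public footprint is too thin.

    `signals` maps an evidence source (e.g. "references", "previous_employer",
    "github", "social_media") to whether it independently checked out.
    The thresholds here are illustrative assumptions.
    """
    # Count how many independent sources corroborate the identity.
    corroborated = sum(1 for ok in signals.values() if ok)
    if corroborated == 0:
        return "reject: no digital shadow found"
    if corroborated < min_corroborated:
        return "review: footprint too thin, verify in person"
    return "proceed: footprint corroborated"
```

The point is not the scoring itself but that the checks are independent: a criminal can fabricate one profile cheaply, but fabricating a consistent history across references, employers, code repositories, and social media is much harder.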

See More: Remote Job Applications Using Deepfakes on the Rise: FBI

Is biometrics the most effective anti-deepfake security measure available right now?

The experts agree that deepfake technology is still at a nascent stage, but attackers will get smarter as the technology advances. Facial biometric technology with liveness detection can be used to verify that someone is an actual person and not a presentation attack – i.e., to tell when a video or photo is being held up to the camera. But as we have seen, technology is now at a point where attackers can inject the deepfake directly into the video stream, bypassing some liveness checks. Amlani believes that to mitigate this, solutions that can verify the trickiest part – that a user is present in real time – are vital.
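Amlani’s point about real-time presence is usually addressed with a challenge-response scheme: the verifier issues a short-lived, unpredictable challenge (for example, a randomized sequence of head movements or an illumination pattern) that a pre-recorded or injected deepfake stream cannot have anticipated. The sketch below is a minimal, hypothetical illustration of the protocol logic only, not any vendor’s implementation; the `ChallengeVerifier` class, its action names, and its timeout value are assumptions.

```python
import secrets
import time

class ChallengeVerifier:
    """Illustrative challenge-response liveness check.

    A random, single-use challenge is issued with a deadline; a response is
    accepted only if it matches the challenge and arrives in time, which a
    replayed or pre-rendered deepfake stream cannot satisfy.
    """

    ACTIONS = ["turn-left", "turn-right", "nod", "blink-twice"]

    def __init__(self, ttl_seconds: float = 10.0):
        self.ttl = ttl_seconds
        self._pending: dict[str, tuple[list[str], float]] = {}

    def issue_challenge(self) -> tuple[str, list[str]]:
        """Create an unpredictable action sequence tied to a one-time nonce."""
        nonce = secrets.token_hex(8)
        sequence = [secrets.choice(self.ACTIONS) for _ in range(3)]
        self._pending[nonce] = (sequence, time.monotonic() + self.ttl)
        return nonce, sequence

    def verify(self, nonce: str, observed: list[str]) -> bool:
        """Accept only a fresh, correct response; the nonce is consumed."""
        entry = self._pending.pop(nonce, None)  # replays fail: nonce is gone
        if entry is None:
            return False
        expected, deadline = entry
        return time.monotonic() <= deadline and observed == expected
```

In a real deployment the “observed” actions would come from a liveness-analysis model watching the camera feed; the essential property is that the challenge is random and expires quickly, so a deepfake prepared in advance cannot answer it.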

Kime states that biometrics, especially facial recognition technology, can help solve the issue of fake job candidates. However, not many devices ship with facial recognition technology. Requiring a candidate to have a specific piece of technology on hand just to interview isn’t feasible today. “Collecting biometric information from job candidates would also likely scare some legitimate candidates away and also increase compliance requirements for companies.” 

For remote employees, biometrics is less effective, as biometric attributes can be stolen and more easily reused by attackers and fraudsters in remote work situations. This risk can be decreased by requiring in-person, human interaction for biometric trait recording and identity proofing by a trusted organization with experience in such things.

– Roger Grimes, data-driven defense evangelist, KnowBe4

Loukides explains that when someone is applying for a job, the company doesn’t have any authentication credentials, let alone biometric credentials. “And I’d also assume that once you hire someone, they can create or obtain fake credentials like fingerprints, retinal scans, or whatever. Biometric data, just like passwords, ultimately comes from the person who is authenticating themselves. I don’t think biometrics as such plays a big role here. Once you let someone in the front door, they’re inside.”

Best practices to protect against sophisticated AI-generated deepfakes

The deepfake threat is very real – in fact, in June 2022, the FBI issued a public service announcement to alert organizations to the threat of deepfake employees. Organizations shouldn’t be complacent: successfully onboarding a criminal posing as an employee could cause far-reaching cyberattacks, data theft, and reputational damage that is far harder to recover from. The experts suggest the following practices to stay safe from deepfake attacks:

  • Paul Bischoff, privacy advocate at Comparitech, suggests that organizations can implement liveness detection to identify deepfakes. This camera-enabled technology looks for signs of a real human or lack thereof, such as movement in the eyes and skin texture. The user might be asked to carry out simple instructions so the camera can get a 3D representation of their face. It’s sort of like a CAPTCHA but for deepfakes instead of bots.
  • Amlani thinks that to combat deepfakes, organizations must ascertain that the interviewee is the right person, a real person, and authenticating right now. Face biometric technology, used to verify applicants against an ID document upon application and before a live interview, is the most effective way for an organization to feel confident the applicant is genuine and present in real time. 
  • Grimes believes education is key. Just being aware that these attacks happen is step one and one of the best ways to prevent them. Second, the interviewer can ask questions that a deepfake candidate cannot easily or readily answer. Third, all potential employees should undergo an extensive background investigation by a trusted investigative organization with experience in the country where the supposed employee is located. This last defense is a great way to avoid being easily fooled. Try always to have someone with extensive knowledge of and familiarity with the purported employee’s home and/or current country, as they will be far more aware of subtleties that a non-native might miss. 
  • Kime believes that HR professionals and technology firms need to begin thinking about deepfake detection in the next few years. Today, HR teams need to work closely with hiring managers to prepare interviewers to detect inauthentic candidates – for example, a candidate who claims to be in your time zone but whose window shows a noticeable lack of daylight.

Is your company equipped to spot deepfake manipulation? Let us know on LinkedIn, Facebook, and Twitter. We would love to hear from you!


Ojasvi Nath

Assistant Editor, Spiceworks Ziff Davis

Ojasvi Nath is Assistant Editor for Toolbox and covers varied aspects of technology. With a demonstrated history of working as a business writer, she has now switched her interest to technology and handles a broad range of topics, from cybersecurity, cloud, AI, and emerging tech innovation to hardware. Being a philomath, Ojasvi thinks knowledge is like a Pierian spring: the more you dive in, the more you learn. You can reach out to her at ojasvi.nath@swzd.com