How to Overcome Hyper-realistic Deepfakes in 2024

Defend against hyper-realistic deepfakes with AI-driven strategies for proactive cybersecurity in 2024.

January 17, 2024


Carl Froggett, CIO at Deep Instinct, highlights the rising threat of hyper-realistic deepfakes in 2024 and guides organizations to adopt AI-driven strategies for proactive cybersecurity, emphasizing prevention over reaction.

Bad actors’ use of deepfakes isn’t new – but their effectiveness, fueled by advances in AI, will take these attack techniques to greater heights in 2024. By running holistic, end-to-end campaigns powered by AI and automation, attackers will leave traditional approaches to defending against deepfakes in the dust.

As artificial intelligence (AI) continues to become more sophisticated, attack techniques are following suit. Advancements in AI are already arming cybercriminals with the means to carry out more complex and calculated attacks – especially realistic deepfakes – forcing the security industry to step up its game this year. However, to do so, leaders must understand how deepfakes are evolving, the risks they pose, and what steps they can take to protect themselves against threat actors using them.

The Rise of Deepfakes

Deepfakes have been around for almost a decade but started to gain traction in 2017. Threat actors began using AI to manipulate the identities of celebrities and well-known figures to push false narratives, falsely promote goods, and attempt to discredit reputations. We’ve seen this with high-profile figures such as Kelly Clarkson and Mark Zuckerberg. In the case of Zuckerberg, threat actors used deepfake technology to manipulate videos and simulate his voice to say, “Whoever controls the data controls the truth,” causing backlash in the media and throughout Facebook. 

Deepfakes precisely mimic a target’s appearance and behavior – so they are typically hard to recognize or detect. Today, amid the AI boom, deepfakes are becoming more realistic and simpler and cheaper to produce, making them challenging for even the most sophisticated cyber professional to spot. 80% of leaders believe deepfakes pose a risk to their business, yet only 29% say they have taken steps to combat them.

The Advancement of Deepfakes and the Cyber Risks Ahead

AI has advanced to the point where threat actors can now use the technology to replicate someone’s voice (dialect and mannerisms) during a phone call and their movements in a virtual meeting. This calls trust into question. If we can’t distinguish between real and AI-generated audio and visuals, how can anyone know what they’re watching or hearing is real? Deepfakes have also made it difficult for employees to spot a phishing email, which is how nearly half of all ransomware attacks begin. Previously, phishing emails were typically laced with grammatical errors and were easy to spot. Now, threat actors are using tools like ChatGPT to craft better-written, grammatically correct emails in various languages that are difficult for spam filters and readers to catch. When this is combined with the style and personality of the supposed sender, it is clear our current approaches will fail.

As cybercriminals become more sophisticated, they will use deepfakes to move beyond the endpoint and enact more holistic end-to-end campaigns backed by AI and automation – from reconnaissance to malware creation and delivery. With such comprehensive measures, threat actors can bypass existing security controls, evade detection once inside, and await the most opportune moment to attack. 


Revamping Security Approaches to Defend Against Deepfakes

The industry has entered a pivotal time. Organizations must fight AI with AI and emphasize preventative, proactive security. To defend against deepfakes, security teams should implement these three best practices within their organizations to ensure a strong security posture throughout 2024.

  • Security training evolution: Just as existing approaches to preventing threats and malware have failed us this past year, security training and awareness programs won’t keep us safe from deepfakes. With the rise of deepfakes and other hyper-realistic phishing campaigns, employees remain the weakest link in an organization’s security strategy. As a result, traditional cybersecurity approaches to defending against these attacks, including information security training and awareness, need to be revamped. Security teams must ensure their employees do not fall victim to deepfakes, and the only way to do so is to remove the human element. This doesn’t mean removing employees altogether but rather supporting them with advanced technology and tools that prevent them from falling victim in the first place.
  • Invest in advanced technologies (such as deep learning): The only way to truly fight AI is with deep learning (DL), the most advanced form of AI, as it can accurately distinguish real from fake video and audio. A DL classifier can fulfill this requirement by inspecting the raw features of an image to detect tell-tale signs of manipulation in images and video, and by distinguishing authentic from artificial voices in audio, taking the initial investigative work out of the hands of the security team.
  • Go beyond traditional identity verification and authentication solutions: Biometric technology has become one of the most effective identity verification methods. From facial and voice recognition to fingerprints, biometric solutions can leverage several personal and unique traits to identify an end-user accurately. With the uptick in deepfakes, these solutions are now constantly updated to help organizations identify and prevent attacks. For example, many biometric solutions can distinguish between authentic and artificial voices by leveraging factors imperceptible to the human ear. Organizations can stay one step ahead of bad actors by developing a multistep authentication approach and utilizing evolving biometric technologies. This may also include retiring existing channels that can no longer be trusted to have integrity, such as email or SMS.
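To make the DL-classifier idea above concrete, here is a minimal toy sketch of a feed-forward classifier trained to separate "real" from "fake" feature vectors. Everything here is an illustrative assumption – the architecture, the synthetic data, and the single-feature "artifact" that stands in for the tell-tale signs real detectors learn – not Deep Instinct's actual technology or any production deepfake detector, which would be far larger and would operate on raw pixels or audio.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyDeepfakeClassifier:
    """Toy two-layer network scoring feature vectors as real (0) or fake (1)."""

    def __init__(self, n_features, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_features, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, X):
        """Return P(fake) for each row of X."""
        h = np.tanh(X @ self.W1 + self.b1)
        return sigmoid(h @ self.W2 + self.b2).ravel()

    def train(self, X, y, lr=0.5, epochs=2000):
        """Full-batch gradient descent on binary cross-entropy."""
        for _ in range(epochs):
            h = np.tanh(X @ self.W1 + self.b1)
            p = sigmoid(h @ self.W2 + self.b2).ravel()
            d2 = (p - y)[:, None] / len(y)            # grad wrt output logits
            d1 = (d2 @ self.W2.T) * (1.0 - h ** 2)    # backprop through tanh
            self.W2 -= lr * (h.T @ d2)
            self.b2 -= lr * d2.sum(axis=0)
            self.W1 -= lr * (X.T @ d1)
            self.b1 -= lr * d1.sum(axis=0)

# Synthetic stand-in data: "fake" samples carry a subtle artifact
# (a shift on one feature) that the classifier learns to pick up.
rng = np.random.default_rng(42)
real = rng.normal(0.0, 1.0, (100, 8))
fake = rng.normal(0.0, 1.0, (100, 8))
fake[:, 0] += 3.0                                     # the tell-tale artifact
X = np.vstack([real, fake])
y = np.concatenate([np.zeros(100), np.ones(100)])

clf = TinyDeepfakeClassifier(n_features=8)
clf.train(X, y)
accuracy = ((clf.forward(X) > 0.5) == y).mean()
```

The point of the sketch is the division of labor the bullet describes: the classifier learns the artifact automatically from labeled examples, so analysts never have to hand-define what a fake "looks like."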
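The multistep authentication approach can likewise be sketched as a simple policy: require a liveness check plus two independent biometric factors before trusting a session. The function name, score scale, and threshold below are hypothetical illustrations, not any specific vendor's API.

```python
def authenticate(face_score: float, voice_score: float,
                 liveness_passed: bool, threshold: float = 0.8) -> bool:
    """Grant access only when a liveness check succeeds AND both
    biometric scores independently clear the threshold.

    Hypothetical sketch: scores are assumed to be in [0, 1], and the
    0.8 threshold is an arbitrary illustrative choice.
    """
    if not liveness_passed:
        return False  # replayed or synthetic media should fail here first
    return face_score >= threshold and voice_score >= threshold

# A convincing deepfake would need to beat every factor at once:
# authenticate(0.95, 0.9, True)  -> access granted
# authenticate(0.95, 0.9, False) -> denied (liveness failed)
# authenticate(0.95, 0.5, True)  -> denied (voice factor failed)
```

Requiring each factor to pass independently (rather than averaging scores) is the design choice that makes the attacker's job multiplicative: spoofing one modality is not enough.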

The battle to defend against deepfakes is upon us. Only those who prioritize these strategies will be able to keep pace with threat actors and their ever-evolving attack methods powered by AI. The reactionary detect-and-respond days are behind us. Investing in a predictive, preventative security strategy is the best way to ensure your organization can combat sophisticated deepfakes. 

How can organizations prepare to combat the surge in hyper-realistic deepfakes? Let us know on Facebook, X, and LinkedIn. We’d love to hear from you!



Carl Froggett

Chief Information Officer (CIO), Deep Instinct

Carl Froggett is Deep Instinct’s Chief Information Officer (CIO). He has a track record in building teams, system architectures, and large-scale enterprise software implementations, and in aligning processes and tools with business requirements. Froggett was formerly Head of Global Infrastructure Defense, CISO Cyber Security Services at Citi. In that role, he delivered integrated risk reduction capabilities and services aligned to the architectural, business, and CISO priorities across Citi’s devices and networks in 100+ countries. Since 1998, he has held various regional and global roles covering all aspects of architecture, engineering, and global operations, and has run critical enterprise cyber services for Citi’s cybersecurity functions.