How to Ethically Navigate AI and Biometrics in 2024?

Navigating risks, regulations crucial for responsible, secure deployment of biometric tech and AI advancements.

January 9, 2024


Biometrics has evolved explosively alongside AI-driven user verification. Sabrina Gross, Regional Director at Veridas, addresses the concerns that have prompted recent regulatory milestones and explains why 2024 will demand heightened AI precision and accountability.

Biometric technology has rapidly evolved over the past few years, with new authentication techniques and innovations introduced every year. 2023, in particular, was a prolific year for biometrics, thanks to the rapid development of AI systems that have enabled faster and more accurate user verification.

The rapid development and adoption of AI this year has also raised significant concerns about the potential pitfalls of AI-driven biometric systems. AI is a powerful tool that can be dangerous in the wrong hands, so the past year has also seen essential steps towards AI regulation. Notable landmarks include the EU’s AI Act, the UK’s AI Safety Summit and Biden’s Executive Order on AI.

Looking ahead to 2024, the focus on the accuracy and reliability of AI will intensify, particularly in scenarios where these systems grapple with limited information or ambiguous instructions. AI hallucinations, where a chatbot outputs false or nonsensical information, will certainly be a focus in the upcoming year as stringent countermeasures are developed.

Moreover, new safeguards will be introduced to oversee AI decision-making. This shift underscores a growing commitment to ensuring AI operates effectively, ethically and responsibly.

What is AI Hallucination?

The term AI hallucination draws on human psychology to describe when AI models generate false or illogical outputs that are nevertheless presented as true. The fluency of AI-generated text can mask these inaccuracies, making them seem credible.

The root causes of AI hallucinations are:

  • Low-quality training data
  • Insufficient user context
  • Programming flaws hindering correct information interpretation

These issues are particularly prevalent in AI text generators and image recognition systems, where large language models (LLMs) are employed. LLMs, such as those used in ChatGPT and Bard, are designed to process language in a human-like manner.

However, their fluency and coherence do not equate to an understanding of the real world. They predict the next word based on probability, not factual accuracy. AI hallucinations vary in severity: some amount to minor factual inaccuracies, while others involve outright misinformation or entirely fabricated data.
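That gap between fluency and truth is visible in the sampling step itself. Below is a minimal, purely illustrative Python sketch; the toy vocabulary and probabilities are invented for this example and are not how any real model works:

```python
import random

# Toy next-token table: for a given context, a probability
# distribution over continuations. Real LLMs learn billions of such
# statistics from text; these numbers are invented for illustration.
next_word_probs = {
    "the mayor was": {"convicted": 0.40, "cleared": 0.35, "re-elected": 0.25},
}

def generate_next(context: str) -> str:
    """Sample the next word by probability, not by factual accuracy."""
    dist = next_word_probs[context]
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights, k=1)[0]

# Fluent either way -- nothing in the sampling step checks what is true.
print("the mayor was", generate_next("the mayor was"))
```

Nothing in that loop consults a source of facts; plausibility under the training distribution is the only criterion, which is exactly the opening that hallucinations exploit.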

For instance, AI has mistakenly created misleading narratives about real people, like wrongly implicating an Australian mayor in a bribery case, blurring the lines between fact and fiction.

Moreover, AI text generators have created fictitious content, such as non-existent URLs and false legal citations, to fulfill user queries. In one notable case, a New York attorney was sanctioned for submitting ChatGPT-generated legal arguments that cited invented precedents.

These examples highlight the potential risks and impacts of AI hallucinations. Addressing these challenges is crucial for the responsible and ethical development of AI technologies.

Hence, governments worldwide have discussed regulations to set hard boundaries on the use of machine learning and curb AI hallucinations. More countries are now working on versions of AI Acts that best fit their national needs.

See More: Biometrics: Why Are They Needed and Top Practical Applications

The Challenge Of Deepfakes

As deepfake technology has become more convincing, adversaries have explored more ways to misuse it. Hackers use applications coupled with Generative Adversarial Networks (GANs) to refine deepfakes, making them increasingly realistic.
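To make the mechanism concrete, here is a minimal, self-contained sketch of a GAN training loop in Python with PyTorch. The tiny fully-connected networks and random stand-in data are illustrative assumptions, not a real deepfake pipeline:

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # toy sizes, not realistic image dimensions

# Generator maps random noise to synthetic data; discriminator scores
# how "real" a given sample looks.
generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                          nn.Linear(128, data_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
                              nn.Linear(128, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_batch = torch.randn(32, data_dim)  # stand-in for real biometric data
real_labels = torch.ones(32, 1)
fake_labels = torch.zeros(32, 1)

for step in range(100):
    # 1. Train the discriminator to tell real samples from fakes.
    fakes = generator(torch.randn(32, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_batch), real_labels)
              + loss_fn(discriminator(fakes), fake_labels))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. Train the generator to fool the discriminator. This pressure
    # is what makes each generation of fakes more realistic.
    fakes = generator(torch.randn(32, latent_dim))
    g_loss = loss_fn(discriminator(fakes), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The same adversarial pressure that makes GANs powerful for legitimate image synthesis is what steadily erodes the telltale artifacts defenders rely on to spot fakes.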

In biometric identity authentication systems, deepfakes can be used for presentation attacks and injection attacks. Presentation attacks involve presenting manipulated biometric data, like deepfake videos, images or synthesized voices, to fool the system into accepting an unauthorized user as legitimate. These attacks can target various biometrics, including face, voice and fingerprints.

Injection attacks are more insidious: deepfakes are inserted into a system’s database or training dataset, potentially skewing the system’s performance and its understanding of genuine biometric data. This could lead to legitimate users being rejected while attackers gain access using artificial biometrics.
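As a rough illustration of how defenders respond, biometric systems typically gate access on both a match score and a liveness (presentation attack detection, or PAD) score. The sketch below is hypothetical; the names, scores and thresholds are invented for illustration and are not drawn from any particular product:

```python
from dataclasses import dataclass

@dataclass
class VerificationResult:
    match_score: float     # similarity between the sample and the enrolled template
    liveness_score: float  # PAD confidence that a live person was present

# Hypothetical operating points, chosen for illustration only.
MATCH_THRESHOLD = 0.85
LIVENESS_THRESHOLD = 0.90

def verify(result: VerificationResult) -> bool:
    """Accept only if the sample matches the enrolled user AND passes
    liveness checks. A convincing deepfake may clear the match threshold,
    so the PAD gate is what blocks presentation attacks."""
    return (result.match_score >= MATCH_THRESHOLD
            and result.liveness_score >= LIVENESS_THRESHOLD)

# A deepfake that matches the victim's face but fails liveness is rejected.
print(verify(VerificationResult(match_score=0.93, liveness_score=0.40)))  # False
```

Note that a gate like this operates at the sensor, so it addresses presentation attacks; injection attacks enter behind that gate, which is part of what makes them harder to defend against.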

The use of deepfake technology is predicted to rise sharply in 2024, particularly on social media and in financial crimes like phone fraud. Organizations are expected to implement robust measures to combat deepfakes and prevent fraudulent activity. Legislation requiring the labeling and tagging of deepfake content is anticipated, with significant penalties for non-compliance akin to GDPR regulations.

Regulating deepfakes poses challenges due to the complexity of governance and varying responsibilities across jurisdictions. This evolving landscape underscores the need for vigilant, adaptive strategies to counter the sophisticated and potentially damaging use of deepfake technology.

See More: Top 10 AI Development and Implementation Challenges

What Is the Role of Transparency and Trust in AI Deployment?

The various AI Acts are set to influence the future landscape of AI and biometrics significantly. Their focus on safety, transparency and user consent is expected to establish new industry benchmarks, cultivating an environment of trust and accountability. 

When organizations certify or evaluate these technologies, they instill confidence, which is not just advantageous but crucial for the effective implementation and progression of AI and biometric systems.

Additionally, these acts introduce heightened responsibility for companies, ensuring that products entering the market are accurate and safeguard individual rights. With the EU recently reaching an agreement on the AI Act, there will be a new focus on holding companies liable for violations and non-compliance, and we can expect to see some companies face hefty fines under the Act. This shift towards more rigorous oversight promises a more secure and ethical framework for using and developing AI and biometrics.

The increasing integration of AI into various aspects of our daily lives brings a new era of technological advancement. This journey towards a modern future is not just about innovation; it’s also about creating a framework where technology operates within the bounds of safety, ethics, and respect for individual rights, leading to a more secure and responsible digital world.

What changes and developments do you want to see in the field of AI in 2024? Let us know on Facebook, X, and LinkedIn. We’d love to hear from you!

Image Source: Shutterstock


Sabrina Gross

Regional Director of Strategic Partnerships, Veridas

For the past five years, Sabrina has worked with global banks, telcos, and insurance companies, heading up customer success teams to streamline and supercharge their engagement. Her background is in investigative systems, where she spent 15 years working with law enforcement agencies across EMEA, giving her a deep understanding of fraud risk and how to balance it with the customer experience. At Veridas, Sabrina focuses on cutting-edge technologies like biometrics used to prevent identity fraud.