
Myths, mistakes and misperceptions of biometrics


Lately, there’s been a growing sense of alarm around biometrics, directed at anything related to the field without much distinction. In this climate of confusion, many people are coming to us concerned about how all this noise might affect the future of their projects.

Recent guidelines and recommendations from the Spanish Data Protection Agency (AEPD), along with the world’s first AI law passed by the European Union, are adding to the confusion in an industry filled with diverse technologies, applications, and use cases. It’s important to avoid generalising and lumping all providers and technologies together.

That’s why today we want to shed some light on these controversies and try to debunk some myths, misunderstandings, and misperceptions around biometrics, as well as the widespread alarm surrounding it. 

 

Myths about Biometrics

Modern biometric technologies are governed by strict regulations designed to protect people’s privacy and prevent misuse. These systems don’t store raw data about our physical features; instead, they process and encrypt that data in a highly secure way. In fact, they can achieve a level of accuracy that often surpasses human ability when it comes to recognising and verifying individuals.

Biometrics has huge potential, making identity verification easier and more accessible across a wide range of processes. With that in mind, let’s look at some of the common myths or misconceptions—many of which have persisted for years, even as biometric technologies have evolved through advances in artificial intelligence.

 

Myth 1: Biometrics violate personal privacy

Biometrics are neither invasive nor a violation of privacy.

We can’t stress this enough: a technology isn’t inherently good or bad—it all depends on how it’s used.

In fact, the world’s first AI law classifies and categorises AI systems based on risk, with a particular focus on how the technology is applied, rather than the technology itself.

For example, remote biometric identification (e.g. facial recognition for video surveillance) is classified as high-risk, particularly if it involves passive monitoring where users aren’t aware or haven’t given consent. On the other hand, biometric applications where users are actively involved, informed and have given consent are considered low-risk.

Same technology, different uses, different levels of risk.

Moreover, while biometrics do involve collecting and analysing unique data such as facial features, this doesn’t necessarily mean a breach of privacy. Most systems store non-reversible biometric templates rather than raw images, meaning it’s impossible to recreate the original image from the stored data.

Additionally, privacy and security practices like data encryption and compliance with regulations such as GDPR ensure individuals’ privacy is protected. 

 

Myth 2: If a hacker steals my biometric data, they’ll have access to my face

If a criminal were to steal your biometric data, all they’d get is an encrypted biometric template, which can’t be reverse-engineered into a full image of your face (in the case of facial recognition).

A biometric template is a numerical representation (a feature vector) of the biometric data that’s stored for future comparisons. These templates are secure due to the complexity of the mathematical operations involved in generating them, typically through deep learning networks.
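
To make this more concrete, here is a minimal, hypothetical sketch in Python of what a template comparison looks like: the template is just a vector of numbers, and verification boils down to measuring how similar two such vectors are against a threshold. The 512-dimensional size, the threshold value and the random vectors are illustrative assumptions, not a description of any particular product.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two biometric templates (feature vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 512-dimensional templates; in a real system each vector would
# be produced by a deep learning model from a face image at enrolment and
# again at verification time.
enrolled_template = np.random.rand(512)
probe_template = np.random.rand(512)

THRESHOLD = 0.6  # illustrative decision threshold, tuned per system
score = cosine_similarity(enrolled_template, probe_template)
print("match" if score >= THRESHOLD else "no match", f"score={score:.3f}")
```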

Biometrics are also far more resilient to attacks than other authentication methods, like passwords.

Modern biometric systems use advanced encryption techniques, data protection protocols, and other security measures that make it much more difficult for hackers to compromise the system.
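
As a rough illustration of what encrypting a template at rest can look like, the sketch below uses the Fernet recipe from the Python cryptography library. It is deliberately simplified: real deployments layer key management, access control and dedicated template-protection schemes on top of anything like this.

```python
import numpy as np
from cryptography.fernet import Fernet

# Illustrative only: encrypt a serialised template before writing it to storage.
key = Fernet.generate_key()        # in practice, held in a secure key store
cipher = Fernet(key)

template = np.random.rand(512).astype(np.float32)      # hypothetical template
encrypted_record = cipher.encrypt(template.tobytes())  # what the database sees

# Only a service holding the key can recover the vector for comparison.
restored = np.frombuffer(cipher.decrypt(encrypted_record), dtype=np.float32)
assert np.allclose(template, restored)
```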


It’s much easier to carry out an identity theft attack using other methods, like stealing a person’s photo from their social media profiles, than it is to reconstruct a face from a biometric template. The complex encryption of biometric data makes reverse-engineering a face practically impossible, while photos publicly available online are much more vulnerable to being misused for identity fraud.

You can learn more about the security of biometric templates in this talk by our colleague, Ángela Barriga:

Myth 3: Biometrics are used for surveillance and control – We are unprotected!

I’m not here to paint a perfect world where this doesn’t happen. Unfortunately, some societies are veering dangerously close to George Orwell’s 1984. China is a prime example—an authoritarian state where human rights are often disregarded.

 

“With great power comes great responsibility” (Peter Parker)

 

In the European Union, indiscriminate video surveillance with real-time facial recognition is not permitted. In fact, under the new AI law, the use of biometrics for this purpose is categorised as “high risk”. This kind of surveillance is only allowed if individuals give explicit consent, such as at large events with heavy crowds.

However, there may be cases where high-risk security situations could justify its use by law enforcement. This opens up a complex debate, balancing the need for public safety with the right to privacy.

We always advocate for ethical and responsible implementation of these technologies.

In everyday use through our devices, biometrics are used solely for their intended purpose—logging into a platform, authorising a transaction, or paying with a facial scan in a shop. Additionally, companies providing these technologies are subject to rigorous audits to ensure proper data handling and privacy standards are met.

 

Myth 4: Facial biometrics are prone to errors due to physical changes or ageing

It’s true that physical changes, like ageing, weight loss, or injuries, can affect the accuracy of biometric systems. However, these systems are designed to be robust and adaptable to such changes.

Facial recognition algorithms are trained through AI and deep learning to recognise individuals across different stages of life, adjusting to gradual changes in appearance.

In cases where doubts arise, users can be given the option to update or re-register their biometric data periodically, which keeps the system accurate and enriches the set of templates stored for each user.
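
One simple way to keep templates up to date, sketched here as an assumption rather than a description of any specific system, is to store a small gallery of templates per user and add new, high-confidence samples over time. The thresholds and gallery size below are purely illustrative.

```python
import numpy as np

MATCH_THRESHOLD = 0.60    # accept the user
UPDATE_THRESHOLD = 0.75   # confident enough to learn from this sample

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_and_maybe_update(gallery: list, probe: np.ndarray,
                            max_templates: int = 5) -> bool:
    """Match the probe against every stored template for this user and,
    on a high-confidence match, add it to the gallery so the system keeps
    up with gradual changes such as ageing."""
    best = max(cosine_similarity(t, probe) for t in gallery)
    if best < MATCH_THRESHOLD:
        return False
    if best >= UPDATE_THRESHOLD:
        gallery.append(probe)
        if len(gallery) > max_templates:
            gallery.pop(0)  # drop the oldest template
    return True
```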

 

Myth 5: Biometrics are difficult to use or require specialised training

One of the key factors behind the widespread adoption of biometrics in recent years has been its convenience and ease of use.

While some biometric systems may require a bit of familiarisation, they are designed to be intuitive and accessible to people of all ages and skill levels. This makes biometrics an inclusive technology.

It’s often easier to use your face or voice to authorise a process or transaction than relying on passwords or OTP codes, which are not only less secure but also create more friction in the process.

In other scenarios, like managing crowds at events or speeding up airport queues, biometric validation greatly improves the flow of people and makes the process smoother for everyone involved. These fast-track biometric access points are often referred to as fast check-ins or passenger flow systems.

 

Myth 6: Facial biometrics are biased by race or even gender

Modern facial recognition systems are based on neural network models trained on image databases to compare and match images and, in doing so, recognise individuals.

If these databases aren’t balanced in terms of representation by gender, race, or age, the system can show bias, performing better for some groups than others. Historically this was a real problem, with noticeably higher error rates for individuals of Black or Asian ethnicity.

Thankfully, there are international organisations dedicated to evaluating biometric algorithms from different providers. The National Institute of Standards and Technology (NIST) is one such body that assesses biometric technologies for accuracy, performance, and security.

Their evaluations use fully balanced datasets across race, gender, and age groups.
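
In practice, checking for this kind of bias comes down to computing error rates separately for each demographic group on a balanced test set and comparing them, which is essentially what these independent evaluations report. The sketch below shows the idea with made-up records; the group names and data are purely illustrative.

```python
from collections import defaultdict

# Made-up evaluation records: (demographic_group, is_genuine_pair, system_accepted)
results = [
    ("group_a", True, True), ("group_a", True, False), ("group_a", False, False),
    ("group_b", True, True), ("group_b", True, True),  ("group_b", False, True),
]

genuine = defaultdict(int); rejected_genuine = defaultdict(int)
impostor = defaultdict(int); accepted_impostor = defaultdict(int)

for group, is_genuine, accepted in results:
    if is_genuine:
        genuine[group] += 1
        rejected_genuine[group] += not accepted   # a missed genuine user
    else:
        impostor[group] += 1
        accepted_impostor[group] += accepted      # a wrongly accepted impostor

for group in sorted(genuine):
    fnmr = rejected_genuine[group] / genuine[group]           # false non-match rate
    fmr = accepted_impostor[group] / max(impostor[group], 1)  # false match rate
    print(f"{group}: FNMR={fnmr:.1%}  FMR={fmr:.1%}")
```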

Our facial recognition technology has been evaluated by NIST for verification and identification processes: FRVT 1:1 and FRVT 1:N.

These evaluations confirm our ability to accurately identify individuals across various genders, races, and age groups in different environments and situations.

 

Conclusion

Despite common myths and misconceptions, biometrics are a highly secure, non-invasive technology. Thanks to the way biometric templates are generated and protected, biometric systems can respect individual rights and privacy through principles like confidentiality, unlinkability, and irreversibility.

However, there are organisations that, in the name of privacy protection and without a solid scientific or technological basis, demonise a technology and an industry that actually helps prevent fraud, allowing public and private companies to verify identities where it was once impossible: in the digital world.

 

Contact us if you want to use cutting-edge AI technology to prevent spoofing and verify your users with biometrics.

GUIDE

Identify your users through their face

In this analogue-digital duality, one of the processes that remains essential for ensuring security is identity verification through facial recognition. The face, being the mirror of the soul, provides a unique defence against fraud, adding reliability to the identification process.