I feel like there’s been a growing sense of alarm around biometrics lately, directed at anything related to the field without much distinction. Amid this confusion, many people come to us concerned about how all this noise might affect the future of their projects.
Recent guidelines and recommendations from the Spanish Data Protection Agency (AEPD), along with the EU’s AI Act (the world’s first comprehensive AI law), are adding to the confusion in an industry filled with diverse technologies, applications, and use cases. It’s important to avoid generalising and lumping all providers and technologies together.
That’s why today we want to shed some light on these controversies and try to debunk some myths, misunderstandings, and misperceptions around biometrics, as well as the widespread alarm surrounding it.
You can learn more about the security of biometric templates in this talk by our colleague, Ángela Barriga:
Myth 3: Biometrics are used for surveillance and control – We are unprotected!
I’m not here to paint a perfect world where this doesn’t happen. Unfortunately, some societies are veering dangerously close to George Orwell’s 1984. China is a prime example—an authoritarian state where human rights are often disregarded.
“With great power comes great responsibility” (Peter Parker)
In the European Union, indiscriminate video surveillance with real-time facial recognition is not permitted. In fact, under the new AI law, the use of biometrics for this purpose is categorised as “high risk”. This kind of surveillance is only allowed if individuals give explicit consent, such as at large events with heavy crowds.
However, there may be cases where high-risk security situations could justify its use by law enforcement. This opens up a complex debate, balancing the need for public safety with the right to privacy.
We always advocate for ethical and responsible implementation of these technologies.
In everyday use through our devices, biometrics are used solely for their intended purpose—logging into a platform, authorising a transaction, or paying with a facial scan in a shop. Additionally, companies providing these technologies are subject to rigorous audits to ensure proper data handling and privacy standards are met.
Myth 4: Facial biometrics are prone to errors due to physical changes or ageing
It’s true that physical changes, like ageing, weight loss, or injuries, can affect the accuracy of biometric systems. However, these systems are designed to be robust and adaptable to such changes.
Facial recognition algorithms are trained through AI and deep learning to recognise individuals across different stages of life, adjusting to gradual changes in appearance.
Where doubts arise, users can be given the option to periodically update or re-register their biometric data, which keeps the system accurate and enriches the stored biometric templates over time.
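As a purely illustrative sketch (not any vendor’s actual implementation), the idea of adapting to gradual appearance changes can be modelled as comparing face embeddings against a similarity threshold and, on a confident match, nudging the stored template toward the new capture. The threshold and update weight below are invented for the example:

```python
import numpy as np

MATCH_THRESHOLD = 0.6  # hypothetical value; real systems tune this per use case
UPDATE_WEIGHT = 0.1    # how much a fresh capture shifts the stored template

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face-embedding vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_and_update(template: np.ndarray, new_capture: np.ndarray):
    """Verify a capture against the stored template; on a confident match,
    blend the capture into the template to absorb gradual ageing."""
    score = cosine_similarity(template, new_capture)
    if score >= MATCH_THRESHOLD:
        updated = (1 - UPDATE_WEIGHT) * template + UPDATE_WEIGHT * new_capture
        return True, updated / np.linalg.norm(updated)
    return False, template  # rejected: leave the stored template unchanged
```

The design choice here is conservative: only accepted captures ever modify the template, so an impostor’s rejected attempt can never drift the template toward their face.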
Myth 5: Biometrics are difficult to use or require specialised training
One of the key factors behind the widespread adoption of biometrics in recent years has been their convenience and ease of use.
While some biometric systems may require a bit of familiarisation, they are designed to be intuitive and accessible to people of all ages and skill levels. This makes biometrics an inclusive technology.
It’s often easier to use your face or voice to authorise a process or transaction than relying on passwords or OTP codes, which are not only less secure but also create more friction in the process.
In other scenarios, like managing crowds at events or speeding up airport queues, biometric validation greatly improves the flow of people and makes the process smoother for everyone involved. These fast-track biometric access points are often referred to as fast check-ins or passenger flow systems.
Myth 6: Facial biometrics are biased by race or even gender
Modern facial recognition systems are based on neural network models trained on image databases to compare and match images and, in doing so, recognise individuals.
If these databases aren’t balanced in terms of representation by gender, race, or age, the system could show bias, with better accuracy for certain groups over others. This used to be a problem, particularly with individuals of Black or Asian ethnicity.
Thankfully, there are international organisations dedicated to evaluating biometric algorithms from different providers. The National Institute of Standards and Technology (NIST) is one such body that assesses biometric technologies for accuracy, performance, and security.
Their evaluations use fully balanced datasets across race, gender, and age groups.
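To make the idea of a balanced evaluation concrete, here is a hypothetical sketch (group names and figures invented for illustration, not taken from any real benchmark) of how per-group error rates can be compared to surface bias, using the false non-match rate (FNMR): the share of genuine attempts a system wrongly rejects.

```python
from collections import defaultdict

def false_non_match_rates(trials):
    """trials: list of (group, accepted) pairs for genuine comparison attempts.
    Returns the per-group false non-match rate (FNMR)."""
    attempts = defaultdict(int)
    rejections = defaultdict(int)
    for group, accepted in trials:
        attempts[group] += 1
        if not accepted:
            rejections[group] += 1
    return {g: rejections[g] / attempts[g] for g in attempts}

# Invented results: 100 genuine attempts per group
trials = ([("group_a", True)] * 98 + [("group_a", False)] * 2
          + [("group_b", True)] * 95 + [("group_b", False)] * 5)
rates = false_non_match_rates(trials)
# A balanced benchmark would flag the gap between the two groups' rates
```

Because the benchmark uses the same number of attempts per group, any gap between the resulting rates reflects the algorithm, not the composition of the test set.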
Our facial recognition technology has been evaluated by NIST for both verification and identification: FRVT 1:1 and FRVT 1:N.
These evaluations confirm our ability to accurately identify individuals across various genders, races, and age groups in different environments and situations.
Conclusion
Despite common myths and misconceptions, biometrics are a highly secure, non-invasive technology. Thanks to the way biometric templates are generated and protected, the technology respects individual rights and privacy through principles like confidentiality, unlinkability, and irreversibility.
However, there are organisations that, in the name of privacy protection and without a solid scientific or technological basis, demonise a technology and an industry that actually helps prevent fraud, allowing public and private companies to verify identities where it was once impossible: in the digital world.
Contact us if you want to use cutting-edge AI technology to prevent spoofing and verify your user with biometrics.
I’m a Software Engineer with a passion for Marketing, Communication, and helping companies expand internationally—areas I’m currently focused on as CMO at Mobbeel. I’m a mix of many things, some good, some not so much… perfectly imperfect.