
Types of presentation attacks in facial biometrics

Identity Verification

The face, that seemingly familiar and intimate part of the body, has always been a reflection of our identity, a kind of interface between ourselves and the world around us. Nevertheless, the perception and treatment of the face have not remained constant throughout history.

David Le Breton, an anthropologist specialising in body studies, points out that the conception of the face has neither been uniform nor free of nuances. Its evolution is intimately linked to the human capacity to represent itself, which has been transformed as new tools and techniques have emerged to capture its essence.

The face as an ephemeral element

In pre-modern societies, the relationship with the face was one of transience. People could not tangibly access their own image; the face was perceived through the gaze of others, with no possibility of personal reflection. The invention of the mirror marked a turning point, offering a new form of introspection. Later, photography provided a visual record and transformed the face into an object of personal and social analysis. With this new capacity for representation, the face acquired a symbolic charge intertwined with concepts of beauty, status and belonging.

Facial features as physical connection and identity

During modernity, facial features became an object of scientific study. Anthropometry, developed in the 19th century, introduced methods to classify and measure facial traits, seeking to establish connections between physical appearance and identity. In this context, photography served both as a visual art and as a research tool that allowed individuals to be categorised into hierarchical classification systems based on their appearance.

Today, facial biometrics has completely changed the relationship between face and identity. It is no longer just about measuring features or generating a visual representation. Facial recognition is based on biometric data and algorithms that carry the physical image into the digital realm. These systems can identify users accurately, and the face is no longer a visual or cultural representation but a mathematical one.

Nevertheless, these systems are not free from manipulation. A new concern then arises that goes beyond the relationship with the face: what happens when our face, a symbol of our individuality and authenticity, can be manipulated or even falsified by an algorithm? How does facial biometrics address the issue of fraud, specifically presentation attacks?

Let’s answer these questions.

What are facial presentation attacks?

Presentation attacks, also known as spoofing attacks, are deliberate attempts to deceive a biometric system by presenting a false representation of the authorised user’s face. These fakes can take the form of photographs, videos, or even three-dimensional artefacts designed to replicate the facial features of the targeted person.

To better understand this phenomenon, we could compare it to John Woo’s film Face/Off, where the protagonist undergoes facial surgery to adopt the identity of a criminal. Although his body is the same, his face becomes a façade, a deception that allows him to manipulate the perception of those around him. In a similar sense, in the field of biometrics, facial presentation attacks represent a kind of transformation where the face loses its direct connection to the individual’s identity and becomes a manipulable entity.

The face is no longer just a unique representation of the individual. It is more like a mask that can be replicated and used fraudulently through technology. High-resolution photographs, silicone masks and video recordings become artefacts that falsify the identity of the subject in biometric systems.

This exposes biometric systems to a vulnerability that can be mitigated with advanced detection algorithms. But first, let’s take a closer look at the different types of presentation attacks.

Types of presentation attacks: the face as an instrument of manipulation

Facial presentation attacks have become a significant challenge for biometric systems. As already mentioned, these attacks rely on impersonating or concealing an identity through techniques that fool facial recognition mechanisms. There are two broad categories: impostor attacks and obfuscation attacks.

  • Impostor attacks: Here the attacker seeks to be recognised as someone other than who they really are. This type of attack has two subtypes: one in which the attacker impersonates a specific individual registered in the system, and another in which the attacker simply seeks to be identified as any other person, no matter who. In both cases, the goal is to breach the biometric system and gain access to areas or services reserved for the impersonated person.
  • Obfuscation attacks: These attacks aim to avoid being identified by the system. In this scenario, the attacker alters or disguises their face so that it does not match any registered person, thus evading identification. This type of attack typically occurs where facial recognition is used in high-security access control.

It is important to understand that presentation attacks operate in the analog domain: they take place outside the system, at the point of capture, even though their aim is to defeat the system’s algorithms. Injection attacks, by contrast, are becoming more common and occur within the system, intercepting the data flow to insert manipulated biometric data. The growing availability of deepfakes heightens this threat.
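To make this taxonomy concrete, here is a minimal sketch in Python. The names and fields are purely illustrative and do not correspond to any real library or standard; they simply encode the two dimensions discussed so far: where the attack is mounted (presentation vs. injection) and what it tries to achieve (impersonation vs. obfuscation).

```python
from dataclasses import dataclass
from enum import Enum, auto


class AttackVector(Enum):
    """Where the attack is mounted relative to the biometric pipeline."""
    PRESENTATION = auto()  # physical artefact shown to the camera (analog domain)
    INJECTION = auto()     # manipulated data inserted after the sensor (digital domain)


class AttackGoal(Enum):
    """What the attacker is trying to achieve."""
    IMPERSONATION = auto()  # be recognised as a specific or arbitrary enrolled user
    OBFUSCATION = auto()    # avoid being matched to any enrolled identity


@dataclass
class AttackAttempt:
    """Minimal record describing a suspected attack, e.g. for audit logging."""
    vector: AttackVector
    goal: AttackGoal
    description: str


# Example: a printed photo held up to the camera to pass as an enrolled user.
attempt = AttackAttempt(AttackVector.PRESENTATION, AttackGoal.IMPERSONATION,
                        "2D printed photograph of the target user")
```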

Presentation Attack Instruments (PAI)

Presentation attack instruments are the elements or mechanisms used to trick a biometric system during the verification or identification process of an individual. Their purpose is to make the system accept an impostor as a legitimate user or to prevent the correct identification of the person in front of the system.

These instruments fall into two broad categories, covering both artificial elements and human characteristics (a simple way of encoding these categories is sketched after the lists below):

Artificial instruments

These instruments are artificially created to mimic or replicate the biometric characteristics of the legitimate user. There are two sub-types:

  • Full instrument: This is a complete recreation of the biometric characteristics of the target person. Examples include:
    • Videos of a face that attempt to mimic natural interaction; when this type of instrument is used, the attempt is often referred to as a replay or display attack.
    • Silicone masks, whose elasticity can produce a convincing likeness of the legitimate person.
    • 2D printed masks, usually with cut-out holes, that mimic a user’s features.
    • 3D face masks, which replicate a person’s face three-dimensionally.
    • 2D facial prints (photos), a rudimentary method used to fool simple facial recognition systems; it usually poses a very low threat.
  • Partial instrument: This type does not recreate the full face or biometric feature, but only a part of it. Examples include:
    • Facial videos where the face is partially covered, e.g. with sunglasses or with only some areas visible.
    • Altered photographs showing only portions of the face.

Human instruments

In this case, the instrument used in the attack is a human being, whose identity or biometric characteristics can be manipulated or used in various ways to deceive the system. These instruments can be classified into the following subcategories:

  • Lifeless: The use of lifeless body parts, such as parts of a corpse’s face, to replicate real facial features and fool the system.
  • Altered: Instruments based on the modification of natural facial features, such as cosmetic surgery or other drastic alterations, which prevent the system from matching the face to a registered identity.
  • Non-compliant: This refers to the use of atypical or unnatural facial expressions, so that the system cannot correctly recognise the legitimate user or the impostor.
  • Coerced: This is where a person is used against their will, e.g. someone unconscious or under threat, to carry out the attack.
  • Compliant: This refers to the simple use of an impostor’s face, without the need for complex alterations, in an attempt to pass as the legitimate user without much effort.
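Purely as an illustration, the categories above could be captured in a small lookup table. The labels are hypothetical, chosen only to mirror the wording of this post rather than any standard PAD nomenclature.

```python
# Hypothetical catalogue of presentation attack instruments (PAIs),
# mapping each label to (origin, coverage) as described above.
PAI_CATALOGUE = {
    "replay_video":            ("artificial", "full"),     # screen replay of a face video
    "silicone_mask":           ("artificial", "full"),
    "printed_2d_mask":         ("artificial", "full"),
    "3d_face_mask":            ("artificial", "full"),
    "printed_2d_photo":        ("artificial", "full"),     # usually a very low threat
    "partially_covered_video": ("artificial", "partial"),
    "cropped_photo":           ("artificial", "partial"),
    "lifeless":                ("human", "full"),           # e.g. parts of a corpse's face
    "altered":                 ("human", "full"),           # surgery or drastic alteration
    "non_compliant":           ("human", "full"),           # atypical expressions
    "coerced":                 ("human", "full"),           # user forced or unconscious
    "compliant":               ("human", "full"),           # impostor's own unaltered face
}


def describe(pai: str) -> str:
    """Return a human-readable summary for a known PAI label."""
    origin, coverage = PAI_CATALOGUE[pai]
    return f"{pai}: {origin} instrument, {coverage} recreation of the face"


print(describe("printed_2d_photo"))
```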

What mechanisms does facial biometrics use to deal with this type of fraud?

One of the most effective approaches to ensuring authenticity is liveness detection. This technique validates that the person trying to access the system is physically present at the moment of verification or authentication, which is crucial to prevent static images, pre-recorded videos or any other digital representation from being used to impersonate identities.
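A minimal sketch of how liveness detection typically gates verification is shown below. The two components, liveness_score and match_score, are hypothetical stand-ins for a real PAD module and face matcher, and the thresholds are illustrative operating points rather than recommended values.

```python
import numpy as np

LIVENESS_THRESHOLD = 0.90   # illustrative values only; real systems tune
MATCH_THRESHOLD = 0.80      # these against their FAR/FRR requirements


def liveness_score(frame: np.ndarray) -> float:
    """Stand-in for a PAD module: probability that the frame shows a live face."""
    raise NotImplementedError


def match_score(frame: np.ndarray, enrolled_template: np.ndarray) -> float:
    """Stand-in for a face matcher: similarity between frame and enrolled template."""
    raise NotImplementedError


def verify(frame: np.ndarray, enrolled_template: np.ndarray) -> bool:
    """Reject the attempt outright if the presentation does not look live,
    and only then compare the face against the enrolled template."""
    if liveness_score(frame) < LIVENESS_THRESHOLD:
        return False  # suspected presentation attack: never reaches the matcher
    return match_score(frame, enrolled_template) >= MATCH_THRESHOLD
```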

A central component of facial biometrics is the presentation attack detection (PAD) module. This system analyses various characteristics and behaviours to differentiate between a genuine face and an artificial one. Through advanced algorithms, it can discern whether it is interacting with a real person or an impostor. In this context, deep convolutional neural networks (CNNs) play a key role in the fight against impersonation: they examine visual input and learn known attack patterns from spatial and temporal features, generating feature maps that allow spoofing attempts to be identified accurately and effectively.
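As a rough illustration of the kind of model involved, the sketch below defines a small convolutional classifier that maps a face crop to a bona fide vs. attack score. It is a toy architecture for exposition, not the network any particular vendor uses; a production PAD model would also exploit temporal cues across video frames.

```python
import torch
import torch.nn as nn


class TinyPADNet(nn.Module):
    """Toy CNN for presentation attack detection: 2 logits (bona fide, attack)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling -> fixed-size descriptor
        )
        self.classifier = nn.Linear(64, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x).flatten(1)
        return self.classifier(feats)


model = TinyPADNet()
face_crop = torch.randn(1, 3, 112, 112)          # a single RGB face crop
attack_probability = torch.softmax(model(face_crop), dim=1)[0, 1].item()
print(f"estimated attack probability: {attack_probability:.2f}")
```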

Liveness detection can be implemented passively, meaning users do not need to perform additional actions, such as blinking or smiling, to prove that they are alive. This approach makes the experience more seamless and minimises the frustration that can lead to abandonment of the process.

The Security Through Obscurity (STO) Principle

On the other hand, the security through obscurity principle also plays an important role in facial biometrics.

This principle implies that the effectiveness of a security system lies in hiding certain details of its operation from potential attackers. In facial biometrics it plays a key role, as users and impostors do not have access to the exact way in which the biometric system operates. Not knowing the specific mechanisms of the system, such as the algorithms it uses to detect whether a face is real or a spoof, makes it difficult for attackers to use reverse engineering to circumvent it.

An attacker may not even be aware that they are being evaluated by a facial recognition system in real time. Without knowing exactly when or how the system verifies authenticity, it becomes much harder to prepare effective tactics to circumvent it.

Furthermore, passive liveness detection, built on STO, takes advantage of this ‘obscurity’ by not requiring the user to perform the visible actions discussed above, making the process more natural and leaving no obvious signals that the system is assessing their authenticity.

Reach out to implement facial biometrics with liveness detection capable of dealing with facial presentation attacks.


GUIDE

Identify your users through their face

In this analogue-digital duality, one of the processes that remains essential for ensuring security is identity verification through facial recognition. The face, being the mirror of the soul, provides a unique defence against fraud, adding reliability to the identification process.