Biometric Data from the Perspective of GDPR
Biometric data is classified by the GDPR as a special category of personal data, subject to enhanced protection. This means processing biometric data is prohibited unless there is a valid legal basis for doing so.
In accordance with GDPR, biometric data can be processed under specific conditions, such as obtaining the explicit consent of the data subject or when processing is essential for substantial public interest.
The GDPR's territorial scope is broad: it applies not only to entities established in the EU but also to organizations elsewhere that process the personal data of individuals in the EU, regardless of where the processing takes place (Article 3).
As such, organizations dealing with biometric identifiers must carefully define their legal grounds for processing and ensure compliance with GDPR requirements. This is crucial for both EU-based and non-EU entities handling biometric data.
Biometric data refers to personal data resulting from the specific technical processing of an individual’s physical, physiological, or behavioral characteristics. It can be used to identify or verify the unique identity of an individual.
Some common examples of special category biometric data include fingerprints, facial images used for recognition, iris and retina scans, voiceprints, and behavioural identifiers such as gait or keystroke dynamics.
To comply with GDPR, organizations must determine if their use of biometric data fits within one of the legal grounds specified in the regulation. These grounds may include obtaining explicit consent from individuals or processing for purposes that fulfill public interests, such as law enforcement or national security.
The data subject must provide explicit, informed consent for the processing of their biometric data. This means they must fully understand what data is being collected, why, and how it will be used. Article 9(2)(a)
Biometric data may be processed when necessary for the establishment, exercise, or defence of legal claims, or for reasons of substantial public interest laid down in Union or Member State law, which can cover certain public security purposes. Article 9(2)(g) & Article 9(2)(f)
Processing can occur if it is necessary to protect someone’s vital interests, such as in emergency situations where biometric data is required to protect life. Article 9(2)(c)
Biometric data may be processed in the context of employment law, provided the processing is necessary for fulfilling obligations or exercising specific rights in the field of employment. Article 9(2)(b)

Processing is permitted when necessary for reasons of substantial public interest, which should be specified by law (e.g., health and safety concerns or public health management). Article 9(2)(g)
Biometric data may be processed when necessary for healthcare or social protection purposes. This includes medical research, health diagnosis, treatment, or the provision of healthcare services, which may reveal information about a person’s physical or mental health status. Article 9(2)(h)
Only the minimum necessary biometric data should be collected, and it should only be processed for specified, legitimate purposes that are clearly defined in advance. The data should not be used for any purposes other than those for which it was initially collected. Article 5(1)(c) & Article 5(1)(b)
Biometric data must be processed securely, ensuring measures like encryption, access controls, and pseudonymization are in place to protect the data from unauthorized access or breaches. Article 32
Organizations must be transparent with data subjects, providing clear information about how their biometric data will be used. This includes maintaining a record of processing activities and ensuring the rights of the data subjects are respected. Articles 12, 13, and 14
For high-risk processing (such as biometric data collection), organizations must conduct a Data Protection Impact Assessment (DPIA) to evaluate the risks and implement measures to mitigate those risks. Article 35
Data subjects have the right to object to the processing of their biometric data, especially when the processing is based on legitimate interests or public interest. Article 21
Failing to adhere to GDPR when handling biometric data can result in severe penalties and reputational damage. Recent cases show the high stakes involved:
The Spanish supermarket chain Mercadona was fined €2.52 million for using facial recognition technology in its stores without proper consent, violating GDPR principles such as necessity and transparency.
A Swedish school was fined for using facial recognition to track attendance, as the processing did not meet the legal criteria under the GDPR. Parental consent was deemed invalid due to the imbalance of power between the institution and parents.
The French DPA (CNIL) imposed a €20 million fine on Clearview AI for collecting biometric data from over 20 billion online photos without consent. The company was ordered to cease its data collection and delete the collected information.
These examples emphasize the importance of transparency, necessity, and lawful consent when processing biometric data, underlining the necessity for organizations to carefully assess their practices.

Organizations processing biometric data must implement strict security measures to ensure compliance with GDPR.
One of the primary strategies for securing biometric data is encryption. Encryption should be applied to protect biometric data both during storage and transmission, ensuring that even if data is accessed by unauthorized parties, it remains unreadable.
For example, AES (Advanced Encryption Standard) can be used to encrypt data at rest, while TLS (Transport Layer Security) is ideal for encrypting data in transit. It’s also important that organizations securely manage encryption keys, allowing only authorized individuals to access them.
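As a minimal sketch of encryption at rest, the example below uses the third-party Python `cryptography` package, whose Fernet recipe wraps AES with an authentication tag. The key handling and the placeholder template bytes are illustrative assumptions, not a prescribed implementation; in production the key would live in a key management system, not next to the data.

```python
from cryptography.fernet import Fernet

# Generate a symmetric key. In production the key would be held in a
# KMS or HSM and released only to authorized services (hypothetical setup).
key = Fernet.generate_key()
fernet = Fernet(key)

# A serialized biometric template (placeholder bytes for illustration).
template = b"facial-embedding:0.12,-0.87,0.44"

# Encrypt before writing to storage ("data at rest").
ciphertext = fernet.encrypt(template)

# Only a holder of the key can recover the original template;
# the stored ciphertext does not expose the plaintext directly.
assert fernet.decrypt(ciphertext) == template
assert template not in ciphertext
```

The same principle applies in transit: TLS performs the equivalent protection on the wire, so the template never crosses the network in the clear.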
Access controls are another critical element in protecting biometric data. Access to sensitive biometric data should be strictly controlled and limited to authorized personnel only. This can be achieved by using role-based access controls (RBAC), where access is granted based on the user’s role and the need to know.
Additionally, multi-factor authentication (MFA) should be employed to secure access further. This requires users to provide multiple forms of identification, such as a password combined with a smart token or authentication via a smartphone.
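The RBAC idea above can be sketched in a few lines. The role names and permissions here are invented for illustration; a real deployment would back this with a directory service and enforce MFA at login.

```python
# Minimal role-based access control (RBAC) sketch.
# Roles and permissions are illustrative, not from any specific product.
ROLE_PERMISSIONS = {
    "security_officer": {"read_biometric", "delete_biometric"},
    "hr_staff": {"read_biometric"},
    "intern": set(),
}

def can_access(role: str, permission: str) -> bool:
    """Grant access only if the role explicitly holds the permission.

    Unknown roles get an empty permission set, so access is denied by
    default (the "need to know" principle).
    """
    return permission in ROLE_PERMISSIONS.get(role, set())

assert can_access("hr_staff", "read_biometric")
assert not can_access("hr_staff", "delete_biometric")
assert not can_access("unknown_role", "read_biometric")
```

The deny-by-default lookup is the key design choice: a role must be explicitly granted a permission before any biometric record can be read.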
Businesses should consider anonymizing biometric data whenever possible to further reduce the risks associated with data breaches. Anonymization removes identifying features from the data entirely, so that it can no longer be attributed to a specific individual; data that is truly anonymized falls outside the scope of the GDPR. Data that can still be linked to an individual with the help of separately held additional information is pseudonymized, not anonymized.
For instance, organizations can store biometric data as hashed templates rather than the raw facial or fingerprint data. Even if the stored templates are accessed, they cannot easily be reversed to identify the individual. Pseudonymization is another technique, in which biometric data is stored separately from other personally identifiable information (PII), reducing the privacy risks.
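The hashed-template idea can be sketched with the Python standard library. Note one caveat: real biometric matching is approximate, so production systems use dedicated protected-template schemes; this sketch, with placeholder bytes, only illustrates the one-way property of a salted hash.

```python
import hashlib
import secrets

def pseudonymize_template(template: bytes, salt: bytes) -> str:
    """Derive a one-way identifier from a biometric template.

    The salted SHA-256 digest can be compared for exact matches but
    cannot be reversed to recover the original template.
    """
    return hashlib.sha256(salt + template).hexdigest()

salt = secrets.token_bytes(16)  # stored separately from the hashes
template = b"fingerprint-minutiae-bytes"  # hypothetical placeholder data

stored = pseudonymize_template(template, salt)

# Verification re-derives the hash and compares; a different input
# produces a different digest.
assert pseudonymize_template(template, salt) == stored
assert pseudonymize_template(b"other-person", salt) != stored
```

Keeping the salt apart from the hashes mirrors the pseudonymization requirement that the "additional information" needed for re-identification be stored separately.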
GDPR's data minimization principle is another critical consideration. This principle requires businesses to collect and store only the minimum amount of biometric data necessary for the specific purpose at hand.
Additionally, businesses should regularly evaluate whether biometric data is the only option for achieving a desired outcome or if there are less intrusive alternatives. For example, using ID cards, PIN codes, or RFID tags for authentication may be sufficient in some situations and pose fewer privacy risks than biometric data.
Organizations should also conduct regular security audits and monitoring of their data processing activities. These audits help identify vulnerabilities, ensure that access to biometric data is being properly managed, and confirm that the data is being processed according to GDPR guidelines.
By implementing these security measures—encryption, access controls, anonymization, data minimization, regular audits, and employee training—organizations can significantly reduce the risks associated with processing biometric data. These practices not only ensure compliance with GDPR but also help to build trust with data subjects by demonstrating a strong commitment to safeguarding their personal information.

As biometric data is increasingly utilized in AI and surveillance technologies, the GDPR plays a crucial role in regulating its use. The EU’s Artificial Intelligence (AI) regulations aim to ensure that AI systems involving biometric recognition adhere to GDPR principles.
The EU's AI Act, which grew out of the European Commission's proposed AI regulatory framework, classifies AI systems based on their potential risks, with higher-risk systems subject to stricter compliance measures. AI systems for real-time biometric identification, such as facial recognition, face heightened scrutiny, especially regarding transparency and the explicit consent of individuals.
As part of its digital strategy, the EU seeks to regulate artificial intelligence (AI) to foster responsible development and deployment of this technology. The regulation of AI systems is evolving, with different risk categories determining the level of regulatory scrutiny.
AI systems, particularly those using biometric data for identification or categorization, are classified as high-risk and will be subject to stringent assessments before being placed on the market. The EU’s AI Act has introduced new obligations for providers and users of AI systems that process biometric data, focusing on safety, transparency, and non-discrimination.
The obligations the AI Act places on providers and users scale with this risk level: minimal-risk systems face few requirements, while high-risk systems are subject to strict scrutiny and compliance measures.
AI systems considered to pose an unacceptable risk to individuals or society will be banned.
These systems include: cognitive behavioural manipulation of people or specific vulnerable groups, social scoring that classifies people based on behaviour, socio-economic status, or personal characteristics, and real-time remote biometric identification in publicly accessible spaces, which is permitted only under narrow law-enforcement exceptions.
AI systems that pose high risks to safety or fundamental rights are categorized into two groups: AI systems used in products covered by EU product safety legislation (for example toys, cars, medical devices, and aviation), and AI systems deployed in specific sensitive areas, such as biometric identification, critical infrastructure, education, employment, and law enforcement, which must be registered in an EU database.
High-risk AI systems must undergo thorough assessments before entering the market and will be monitored throughout their lifecycle to ensure compliance.
Generative AI technologies, like ChatGPT, are subject to transparency requirements under the EU's AI regulations. These include: disclosing that content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of copyrighted data used for training.
For AI systems that fall under the category of limited risk, there are minimal transparency requirements. These systems, which include AI technologies that manipulate audio, video, or image content (e.g., deepfakes), must ensure users are aware they are interacting with AI. Users should be given the choice to continue using the system after being informed of its nature.

In conclusion, the processing of biometric data under GDPR requires businesses to adhere to strict legal, technical, and organizational standards to ensure compliance with data protection law.
With the increasing integration of biometric data into AI and surveillance technologies, it’s more important than ever for businesses to have the right tools to manage and protect this sensitive information. By adopting best practices like encryption, anonymization, and data minimization, businesses can reduce the risks associated with biometric data processing.
To ensure seamless compliance and safeguard your organization’s data, consider using GDPR Register’s Compliance Software. It offers a comprehensive suite of tools designed to help you manage compliance, track your processing activities, and maintain data security, all while adhering to the latest GDPR and AI regulations.