The technology team at William Fry explains how the use of behavioural biometrics can raise privacy issues under new GDPR rules.
The way a user interacts with their phone screen, keyboard or mouse contains identifying features that can be as accurate as fingerprint mapping, iris scans or facial recognition. By using screen sensors in devices and identification algorithms on websites, companies can track thousands of data points by analysing the angle at which people hold their devices, the fingers they use to scroll, the pressure they apply, the rhythm of their keystrokes and the movement of their cursor.
All of this data can be used to build a unique digital profile assigned to a particular user, a technique known as behavioural biometrics. The profile is then measured against every new interaction, analysing thousands of elements to calculate a probability-based determination of whether the user is who they say they are when they log in to an online account, device or application.
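The enrol-then-score pattern described above can be illustrated with a deliberately simplified sketch. The code below is our own illustration, not any vendor's method: it stands in for the thousands of real data points (touch angle, pressure, cursor movement) with a handful of keystroke-timing features, builds a profile of their means and standard deviations, and squashes the distance of a new session from that profile into a 0–1 probability-style score.

```python
import math

def build_profile(sessions):
    """Build a simple behavioural profile from enrolment sessions.

    Each session is a list of inter-keystroke intervals in milliseconds.
    The profile stores the per-feature mean and standard deviation —
    a toy stand-in for the rich feature sets real systems collect.
    """
    n = len(sessions)
    means = [sum(col) / n for col in zip(*sessions)]
    stds = [
        # Guard against zero variance so scoring never divides by zero.
        math.sqrt(sum((x - m) ** 2 for x in col) / n) or 1.0
        for col, m in zip(zip(*sessions), means)
    ]
    return {"means": means, "stds": stds}

def match_score(profile, new_session):
    """Return a 0-1 similarity score for a new interaction.

    Computes the average z-score distance of the new session from the
    profile and passes it through a shifted logistic function: values
    near 1 suggest the same user, values near 0 suggest an impostor.
    Real systems use far richer probabilistic models.
    """
    z = [
        abs(x - m) / s
        for x, m, s in zip(new_session, profile["means"], profile["stds"])
    ]
    avg_z = sum(z) / len(z)
    return 1.0 / (1.0 + math.exp(avg_z - 2.0))

# Enrolment: three past sessions of three timing features each.
profile = build_profile([[120, 95, 110], [118, 100, 105], [122, 98, 108]])
genuine = match_score(profile, [119, 99, 107])   # close to the profile
impostor = match_score(profile, [200, 40, 180])  # far from the profile
```

In practice the "probability-based determination" the article describes would come from a trained statistical model over many more behavioural signals, but the shape of the decision is the same: a genuine session scores high, an anomalous one scores low, and the system acts on a threshold between them.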
Biometrics and GDPR
With cyber-theft and data breaches increasingly common, the need to rapidly and accurately identify fraud has driven the development of behavioural biometrics as a means of targeting automated attacks and suspicious transactions. For the technology to function, however, companies must amass libraries of biometric personal data to construct profiles based on how users touch, hold and tap their devices.
Given the scale and quantity of personal data potentially being collected, as well as the multitude of potential purposes outside of fraud detection, concerns are emerging that behavioural biometrics may not be compatible with European Union privacy rules, in particular the legal bases under Article 9 of the General Data Protection Regulation (GDPR).
Many companies, from start-ups to large tech multinationals, have begun to build behavioural biometrics into their security software as the technology gains recognition in the cybersecurity market as a powerful safeguard. Given its use in combating fraud, most companies are reluctant to reveal the details of their biometric processing beyond the mandatory information required in privacy notices.
Striking balance between security and privacy
When the GDPR came into force in May 2018, it introduced new rules around biometric data, recognising it as a “special category of personal data” that requires both a specific legal basis under Article 9 and an accompanying data protection impact assessment (which identifies the privacy risks and the measures that must be implemented to mitigate them).
Furthermore, the GDPR contains a very broad definition of biometric data and allows member states to impose additional conditions and limitations on a national basis.
These factors, combined with the relative immaturity of the technology, mean that businesses contemplating deploying behavioural biometrics will need to ensure their processing complies with the developing rules for the technology under the GDPR.
Finally, given the scale of the personal data being collected, the potential for misuse and the ongoing questions as to whether the effectiveness of behavioural biometrics can be replicated by less-invasive methods, companies can expect scrutiny from regulators across Europe, who have been tasked with examining whether the technology’s benefits outweigh its impact on the privacy of individual citizens.
By John Magee, John O’Connor and David Cullen, with Alex Towers contributing
John Magee and John O’Connor are partners in William Fry’s technology group, which is led by partner David Cullen. Alex Towers also works with the technology team, which advises Irish start-ups and established international brands on technology matters such as data protection, intellectual property, licensing, outsourcing and e-commerce.
A version of this article originally appeared on the William Fry blog.