The AI Act looms closer, but will it be enough?

13 Feb 2024

The EU’s Pirate Party is opposing the AI Act over its stance on biometric surveillance, while Dr Kris Shrishak of the ICCL says the act relies too much on ‘self-assessment’ by companies.

The EU has moved one step closer to bringing in a landmark piece of legislation that aims to rein in potential dangers of AI technology.

Earlier today (13 February), MEPs from two European Parliament committees endorsed the provisional agreement on the AI Act, which has been in development since 2021.

A draft version of the legislation was approved by Parliament last June, but the technical details of the act faced disruptions towards the end of 2023, when EU countries such as France and Germany pushed for more relaxed rules on certain forms of AI to encourage innovation.

After the technical details were agreed upon last December, an updated proposal was approved earlier this month. But certain aspects of the act have drawn criticism, with MEPs from the Pirate Party saying they will not support it due to its rules on biometric surveillance.

Dr Kris Shrishak, a technology fellow at the Irish Council for Civil Liberties, said the act has been “improved” through the legislative process but that it “does not set a high bar for protection of people’s rights”.

“It relies on technical standardisation organisations to translate legal text to technical requirements and to address socio-technical issues such as bias, for which they are not well-equipped,” Shrishak said.

A risk-based approach

A key focus of the current draft of the AI Act is to rein in some of the more extreme uses of AI by creating a system that ranks uses of the technology based on potential risk. AI systems deemed higher risk will have to follow stricter rules, while some forms of AI technology are prohibited entirely.

One key example of prohibited technology is the concept of a social scoring system – which has become associated with the controversial social credit system in China. Other “forbidden” use cases are techniques that use AI to manipulate people in a way that “impairs their autonomy, decision-making and free choices”.

Examples of these manipulative tactics include AI systems that deploy “subliminal components” or AI systems that exploit a person’s vulnerabilities, such as their age, a disability or being in a “specific social or economic situation”.

The AI Act also calls on deployers of AI systems to use labels to clearly disclose when content has been artificially created or manipulated by AI. This part of the legislation aims to deal with the threat of deepfakes.

But Shrishak took issue with the AI Act’s risk-based approach and claimed that at its core, the act “relies on self-assessments”.

“Companies get to decide whether their systems are high risk or not,” he said. “If high risk, they only have to perform self-assessment. This means that strong enforcement by the regulators will be the key to whether this regulation is worth its paper or not.

“The regulation of general purpose AI is mostly limited to transparency and is likely to be inadequate to address the risks that these AI systems pose.”

Biometric surveillance

But one of the biggest criticisms the AI Act faces is how it addresses the use of biometric surveillance – such as facial recognition technology.

The AI Act states that using AI for real-time biometric surveillance in publicly accessible spaces is “particularly intrusive to the rights and freedoms of the concerned persons” and states that it should be prohibited – “except in exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest”.

Examples of such situations include searching for missing people and responding to specific threats such as terrorist attacks. Biometric identification that is not in real time should only be used in a way that is “proportionate, legitimate and strictly necessary”, according to the act.

Shrishak noted that although facial recognition technology is allowed under a “certain set of conditions”, each EU country can choose to be more restrictive and that “countries have the possibility to go for a full prohibition” of this form of technology.

A draft law for the use of facial recognition technology was published in Ireland towards the end of 2023 after the Dublin riots. Minister for Justice Helen McEntee, TD, said that facial recognition technology will “dramatically” save time, speed up investigations and free up resources for An Garda Síochána.

Meanwhile, the AI Act also allows the use of emotion recognition technology – which Shrishak described as “a pseudoscience” – in specific cases. This type of biometric surveillance was criticised by a UK watchdog in 2022. That same year, Microsoft said experts “inside and outside the company” highlighted issues with this form of technology.

The AI Act states that using AI systems to detect the emotional state of individuals in situations related to the workplace and education should be prohibited, but that this prohibition should not cover AI systems being used “strictly for medical or safety reasons, such as systems intended for therapeutical use”.

The EU Pirate Party said it will oppose the AI Act because it lets member states use biometric surveillance. MEP Patrick Breyer claimed the AI Act will provide “an instruction manual for governments to roll out biometric mass surveillance in Europe”.

“Any public space in Europe can be placed under permanent biometric mass surveillance on these grounds,” Breyer said. “This law legitimises and normalises a culture of mistrust. It leads Europe into a dystopian future of a mistrustful high-tech surveillance state.”

Leigh Mc Gowran is a journalist with Silicon Republic
