What to expect if the EU passes the AI Act

12 Jun 2023

Deirdre Clune MEP. Image: Deirdre Clune

EU documents suggest amendments are in place to tackle new AI models like ChatGPT, but biometric surveillance remains a contentious issue.

After years of development, the EU’s AI Act is close to becoming a reality with a plenary vote taking place this week.

If the vote is successful, the EU could create a landmark piece of legislation to rein in various advanced technologies, including generative AI systems, biometric surveillance and AI products in certain sectors such as transport and healthcare.

The AI Act is expected to include various new rules to manage general-purpose AI and foundation models, which have grown in popularity with the rise of products like ChatGPT. The act is also expected to create an EU AI Office that will oversee how the new rules are implemented.

But while support for the creation of the act has been strong, certain aspects have divided parliament, Euractiv reports.

Draft documents seen by SiliconRepublic.com give an idea of what the AI Act hopes to accomplish and which sectors will be monitored. These documents come from the end of May, so certain aspects may be altered by the time the final vote takes place later this week.

Biometric contention

One controversial technology addressed in the act is biometric identification, which can be used to identify individuals through measures such as facial recognition and retinal scans.

The AI Act contains provisions to ban the general use of these biometric systems, due to concerns around surveillance and privacy breaches.

However, since amendments could be submitted until 7 June, contention remains in the EU: some parliament groups want a full ban in all cases, while others want the technology to be available in certain circumstances.

Deirdre Clune is a Fine Gael MEP in the European People’s Party (EPP) and a lead negotiator on the AI Act for her party. Speaking to SiliconRepublic.com, Clune said she would support the use of biometric surveillance for specific use cases.

“I think it should be there in very limited circumstances, child abduction issues, terrorist [or] serious criminal offences,” Clune said. “With supervision and with judicial approval.”

Patrick Breyer, an MEP from the European Pirate Party, claims biometric real-time surveillance has never prevented a terrorist attack “or other events of this kind”.

“France and Hamburg are threatening to introduce technology that will automatically report us for ‘anormal’ behaviour to the police,” Breyer said. “Such suspicion machines wrongly report countless citizens, are discriminatory, educate to conformist behaviour and are absolutely no good for arresting criminals.”

In November, a group of MEPs said they would not support the AI Act if it does not include a full ban on the use of biometric surveillance.

Foundational AI additions

One of the biggest changes in the AI sector this year has been the rise of generative AI systems like ChatGPT, the advanced chatbot that prompted multiple tech giants to delve deeper into AI technology.

This impact reportedly delayed the AI Act as well, as lawmakers had to adjust the legislation to properly cover these systems.

“These systems were not even fully in use when the Act was first drafted,” Clune said. “In negotiating these new rules on behalf of the EPP Group, I was clear that foundation models such as ChatGPT must be addressed.”

“These models have the potential to revolutionise many areas of life, but also pose significant risks. They must be subject to proper scrutiny and oversight to prevent any harm to individuals or society as a whole.”

The draft legislation refers to foundation models as a recent development where AI models are “developed from algorithms designed to optimise for generality and versatility of output”.

The documents reference foundation models being provided to other companies through API access and say “cooperation” should be available between the provider and the recipient to enable “appropriate risk mitigation”.

The draft legislation also states that it is “essential to clarify the role of actors” contributing to the development of AI systems, as there is uncertainty around how foundation models will evolve in the future.

Due to the risks involved, the legislation calls for foundation models to meet certain obligations, such as having to “assess and mitigate possible risks and harms through appropriate design, testing and analysis”. Generative models should also make it transparent that content was created by an AI system rather than a human.

“These specific requirements and obligations do not amount to considering foundation models as high-risk AI systems, but should guarantee that the objectives of this regulation to ensure a high level of protection of fundamental rights, health and safety, environment, democracy and rule of law are achieved,” the EU document states.

Monitoring high-risk systems

One of the biggest impacts the AI Act is expected to have on the sector is the introduction of risk-based classifications for specific AI systems.

Certain technologies that are expected to receive a full ban as a result of unacceptable risk include predictive policing systems, emotion recognition systems and real-time biometric surveillance.

The legislation will also categorise certain systems as “high-risk”. The draft documents refer to high-risk systems as those that pose a “significant risk of harm to the health and safety or the fundamental rights of persons and, where the AI system is used as a safety component of a critical infrastructure, to the environment.”

The act will also promote the use of controlled environments, or sandboxes, in which AI systems can be tested with support from public authorities before they are deployed to the wider public.

Clune said that high-risk AI systems are not necessarily “a bad thing” and that “we don’t want to discourage people from developing AI to use in those areas”. But she added that any system identified as high-risk must comply with regulation due to the dangers it presents to individuals.

“It will be supervised,” Clune said. “The data used to develop the algorithm will be submitted to the regulators and it will be supervised and there will be much more transparency.”


Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com