After years of negotiations, the AI Act has finally arrived to rein in this fast-evolving technology, but it still faces criticism.
The EU’s long-awaited rules to regulate the growing AI sector are finally here, as the AI Act was officially adopted in a vote today (13 March).
MEPs voted overwhelmingly in favour of adopting the Act, with 523 supporting it, 46 voting against and 49 abstaining. The vote marks the end of years of negotiations and hurdles since the legislation was first proposed in 2021.
The result means the EU will soon have arguably the most robust and detailed form of AI regulation in the world, in a bid to rein in the high-risk aspects of this evolving technology.
Irish MEP Deirdre Clune, a lead lawmaker in the drafting of the Act, said it might be the most significant piece of legislation to come from the European Parliament “in the past 5 years”, as AI will “fundamentally alter how we all live our lives”.
“We cannot allow AI to grow in an unrestricted and unfettered manner,” Clune said. “This is why the EU is actively implementing safeguards and establishing boundaries.
“The objective of the AI Act is simple – to protect users from possible risks, promote innovation and encourage the uptake of safe, trustworthy AI in the EU.”
Companies still have time to prepare: the AI Act will enter into force 20 days after its publication in the Official Journal and will be fully applicable two years later, though some prohibitions will take effect after six months and some governance rules and obligations after 12 months.
What will the AI Act do?
In simple terms, the AI Act will attempt to rein in AI technology while letting the EU benefit from its potential by taking a risk-based approach. If a type of AI technology is deemed high-risk, its developers must follow stricter rules to prevent its abuse.
The Act will also prohibit certain uses of AI entirely, such as social scoring systems – which have become associated with the controversial social credit system in China. Other “forbidden” use cases include techniques that use AI to manipulate people in a way that “impairs their autonomy, decision-making and free choices”.
The AI Act will also require deployers of AI systems to clearly disclose if any content has been artificially created or manipulated by AI, in order to deal with the threat of deepfakes.
Specific details of the AI Act had been under contention since the end of 2023, as certain EU countries called for more relaxed rules for developers of foundation models, over concerns that stricter regulation could hamper innovation. These issues were resolved last month after a series of negotiations.
‘Smart AI legislation’
The AI Act has been praised by various experts and companies within the AI sector. Bruna de Castro e Silva, AI governance specialist at Saidot, said the Act is the culmination of “extensive research, consultations and expert and legislative work” and is founded on a “solid risk-based approach”.
“The Act will ensure that AI development prioritises the protection of fundamental rights, health and safety, while maximising the enormous potential of AI,” Silva said. “This legislation is an opportunity to set a global standard for AI governance, addressing concerns while fostering innovation within a clear responsible framework.
“While some seek to present any AI regulation in a negative light, the final text of the EU AI Act is an example of responsible and innovative legislation that prioritises technology’s impact on people.”
Christina Montgomery, IBM VP and chief privacy and trust officer, commended the EU for passing “comprehensive, smart AI legislation”.
“The risk-based approach aligns with IBM’s commitment to ethical AI practices and will contribute to building open and trustworthy AI ecosystems,” Montgomery said.
The passing of the AI Act is also expected to have an impact on the global stage. Forrester principal analyst Enza Iannopollo said most companies in the UK will need to comply with the AI Act if they wish to do business internationally, “just like their counterparts in the US and Asia”.
“Despite the aspiration of becoming the ‘centre of AI regulation’, the UK has produced little so far when it comes to mitigating AI risks effectively,” Iannopollo said. “Hence, companies in the UK will have to face two very different regulatory environments to start with.
“Over time, at least some of the work UK firms undertake to be compliant with the EU AI Act will become part of their overall AI governance strategy, regardless of UK-specific requirements – or lack thereof.”
Criticisms of the AI Act
The Act is facing some criticism, however, particularly from the EU’s Pirate Party, which has been vocal for months about the Act allowing member states to use biometric surveillance – such as facial recognition technology.
The Act states that using AI for real-time biometric surveillance in publicly accessible spaces should be prohibited – “except in exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest”. Examples of such situations include searches for missing people and specific threats such as terrorist attacks.
MEP Patrick Breyer claims that the AI Act means the European Parliament is “legitimising” biometric mass surveillance.
“Rather than protecting us from these authoritarian instruments, the AI Act provides an instruction manual for governments to roll out biometric mass surveillance in Europe,” Breyer said. “As important as it is to regulate AI technology, defending our democracy against being turned into a high-tech surveillance state is not negotiable.”
Dr Kris Shrishak, a technology fellow at the Irish Council for Civil Liberties, told SiliconRepublic.com last month that the AI Act had been improved but “does not set a high bar for protection of people’s rights”. He also claimed that the Act relies too much on “self-assessments” when it comes to risk.
“Companies get to decide whether their systems are high risk or not,” Shrishak said. “If high risk, they only have to perform self-assessment. This means that strong enforcement by the regulators will be the key to whether this regulation is worth its paper or not.
“The regulation of general-purpose AI is mostly limited to transparency and is likely to be inadequate to address the risks that these AI systems pose.”