EU unveils game plan to rein in ‘high-risk’ AI

21 Apr 2021

EU commissioner Margrethe Vestager. Image: Claudio Centonze/European Commission

The new set of proposals will classify different AI applications depending on their risks and implement varying degrees of restrictions.

After weeks of leaks, the European Commission has now unveiled its plan for regulating artificial intelligence.

Much like how the EU led the way on data protection laws, the commission is hoping to set new standards for oversight of artificial intelligence in a bid to create what it calls “trustworthy AI”.

Restrictions will be introduced on uses of the technology that are identified as high-risk, with potential fines for violations of up to €30m or 6pc of global turnover, whichever is higher.


The legal framework will be implemented through a coordination plan among member states.

The regulations will consider AI under four different categories: unacceptable risk, high-risk, limited risk and minimal risk.

In cases of unacceptable risk, AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned. This includes so-called social credit scores, such as a controversial system seen in China, and applications that “manipulate human behaviour”.

High-risk use cases include the use of AI in critical infrastructure, law enforcement, migration and border control, employment and recruitment, and education.

The framework stipulates that these applications implement strict security controls, maintain usage logs for auditing, and provide users with information on how the AI operates. It also requires a degree of human oversight while the technology is in use.

This category still allows for the use of “remote biometric identification systems”, such as live facial recognition, subject to strict requirements. Live use of this tech in publicly accessible spaces for law enforcement purposes is prohibited in principle, the commission said, but narrow exceptions are strictly defined and regulated.

Some MEPs and civil society groups were pushing for an outright ban on the use of live facial recognition in public spaces.

Instead, the commission said that “such use is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched”.

Lower risks

The limited risk category requires that systems be transparent about the fact that AI is in use. For example, a chatbot should make clear to users that they are interacting with a machine and not a human.

The commission said that the “vast majority of AI systems” fall into the minimal risk category, which allows rudimentary uses, such as AI-enabled spam filters, to operate largely freely.

“On artificial intelligence, trust is a must, not a nice-to-have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” Margrethe Vestager, executive vice-president for a Europe fit for the Digital Age, said.

“By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way. Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”

Vestager, who is also competition commissioner, announced the proposals today (21 April) alongside Thierry Breton, commissioner for the internal market.

The commissioners have proposed the creation of a European Artificial Intelligence Board to implement the rules, with member states’ authorities enforcing them. They also suggested establishing voluntary codes of conduct for non-high-risk AI and regulatory sandboxes to allow for AI testing and development under oversight.

The commission’s plan will align the 27 member states on how to enact and follow the new rules, once they are passed.

“Today’s proposals aim to strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use,” Breton said.

Next steps for the proposal require adoption by the European Parliament and member states.

Jonathan Keane is a freelance business and technology journalist based in Dublin

editorial@siliconrepublic.com