Committee on AI says EU has ‘fallen behind’ in global tech leadership race

23 Mar 2022


The EU needs to act as a ‘global standard-setter’ in AI, according to a new report that also warned about the risks of mass surveillance.

A new EU report says public debate on the use of artificial intelligence (AI) should focus on the technology’s “enormous potential” to complement humans.

The European Parliament’s special committee on artificial intelligence in a digital age adopted its final recommendations yesterday (22 March) after 18 months of inquiries. The committee’s draft text notes that the world is on the verge of “the fourth industrial revolution” from an abundance of data combined with powerful algorithms.


But it adds that the EU has “fallen behind” in the global race for tech leadership, which poses a risk that future tech standards could be developed by “non-democratic actors”. Due to this risk, it recommends that the EU act as a “global standard-setter” in AI.

“We neither take the lead in development, research or investment in AI,” the text states. “If we do not set clear standards for the human-centred approach to AI that is based on our core European ethical standards and democratic values, they will be determined elsewhere.”

MEPs of the special committee identified policy options that could unlock AI’s potential in various areas, such as health, the environment and combating global hunger. They believe that if combined with “the necessary support infrastructure, education and training”, AI can increase labour productivity, innovation, sustainable growth and job creation.

“The EU now has the unique chance to promote a human-centric and trustworthy approach to AI based on fundamental rights that manages risks while taking full advantage of the benefits AI can bring for the whole of society,” MEP Axel Voss said. “We need a legal framework that leaves space for innovation, and a harmonised digital single market with clear standards.

“We need maximum investment and a robust and sustainable digital infrastructure that all citizens can access,” Voss added.

The special committee suggests that the EU should not always regulate AI as a technology, but that regulation should be “proportionate to the type of risk associated with using an AI system in a particular way”.

The report was adopted by the special committee with 25 votes for and two against, with six abstentions. The draft text aims to help establish an AI roadmap for the EU up to 2030.

Last April, the European Commission proposed new standards to regulate AI in a bid to create what it calls “trustworthy AI”. These proposals seek to classify different AI applications depending on their level of risk and implement varying degrees of restrictions.

Mass surveillance risks

The draft text from the European Parliament’s special committee also references the ethical and legal questions that AI technology could pose in the future, with challenges such as reaching a consensus on the responsible use of AI, or military research into autonomous weapon systems.

One key risk highlighted is that AI can enable the autonomous collection of data “to an unprecedented scale”. This could pave the way for mass surveillance that poses a threat to rights such as privacy and data protection.

“Authoritarian regimes apply AI systems to control, exert mass surveillance and rank their citizens, or restrict freedom of movement,” the committee said in a statement. “Dominant tech platforms use them to obtain more information on a person.”

Committee chair and Romanian MEP Dragoş Tudorache said the EU’s future global competitiveness depends on the rules put in place, but added that these “need to be in line” with values such as democracy and fundamental rights.

“This is paramount, as the struggle between authoritarianism and democracy is becoming more and more acute – and unfortunately more deadly, as we have seen with Russia’s unjustified invasion of Ukraine,” Tudorache said.

In recent years, concerns have been raised about facial recognition technology in terms of surveillance, privacy, consent, accuracy and bias.

Last year, the EU proposals for regulating AI were met with criticism by EU watchdogs for not going far enough when it comes to live facial recognition in public places. The European Parliament then called for a ban on biometric mass surveillance technologies, such as facial recognition tools, citing the threat these technologies can present to human rights.

Digital Markets Act

Another proposal the EU is developing to rein in Big Tech is the Digital Markets Act (DMA), which MEPs voted overwhelmingly in favour of last December. This, along with the Digital Services Act, aims to tackle the monopoly large multinationals hold in Europe’s digital space.

The DMA aims to blacklist certain practices used by large ‘gatekeeper’ platforms – companies that wield a disproportionate amount of power in their markets – and enable the European Commission to carry out market investigations and sanction non-compliant behaviours.

According to the Financial Times, the DMA could be revealed as early as tomorrow (24 March), as crucial details such as the size of companies targeted have been agreed. Rules are expected to target companies that run a core online platform service such as a social network or web browser and have a market capitalisation of at least €75bn.


Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com