The issues with the EU’s draft regulation on facial recognition AI

17 May 2022


Dr Kris Shrishak of the ICCL discusses the challenges of regulation and ‘enforcement teeth’ when it comes to facial recognition technology.

It is no surprise that the global discussion about facial recognition technology has grown in recent years. As the tech becomes more advanced, concerns have been raised about surveillance, privacy, consent, accuracy and bias.

Last year, initial EU proposals for regulating AI were criticised by European watchdogs for not going far enough on live facial recognition in public places. After this, MEPs called for a ban on biometric mass surveillance tech, such as facial recognition tools, citing the threat these technologies could present to human rights.

Recently, the European Parliament adopted the final recommendations of a special committee, which said the debate on AI should focus on the technology’s enormous potential to complement human labour. This report will feed into the discussion of the proposed AI Act in Europe.

But during the vote, MEPs pointed out that certain AI technologies enable information processing at a massive scale, which could pave the way for potential mass surveillance and other unlawful interference in fundamental rights.

Dr Kris Shrishak, a technology fellow at the Irish Council for Civil Liberties (ICCL) who advises legislators on AI regulation, said biometric data, such as the facial images used in facial recognition, is “sensitive and deeply personal” and so its collection and use already have “special protections” under GDPR.

Speaking to SiliconRepublic.com, Shrishak said one issue with facial recognition technology is that much of it is developed by large, privately owned companies that could find loopholes in regulation.

One example he gave is the use of facial recognition in CCTV cameras. Shrishak said some companies make the claim that there is no facial recognition tech within the cameras themselves, but that this hides the full picture.

“What they’re mostly saying is, the camera that you see does not perform facial recognition itself,” he explained. “But it tells you nothing about what happens to the images once they’re captured and stored. Once you move it into a server, you can always turn on your facial recognition. It’s not with the CCTV itself. So that’s always something to keep in mind.”
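To make that architecture concrete, here is a minimal, hypothetical sketch of how recognition can run entirely server-side on frames a camera has already uploaded. The embedding function is a stand-in for a real face-embedding model, and all names and thresholds are illustrative, not any vendor’s actual system.

```python
# Hypothetical sketch: a camera that only captures and uploads frames
# still enables facial recognition, because the matching step can be
# switched on later, on the server where the images are stored.
import numpy as np

def extract_embedding(image: np.ndarray) -> np.ndarray:
    """Stand-in for a real face-embedding model running server-side.
    Here we just flatten and normalise the pixels for illustration;
    the camera itself never needs this capability."""
    vec = image.astype(np.float32).ravel()
    return vec / np.linalg.norm(vec)

def identify(frame: np.ndarray,
             watchlist: dict[str, np.ndarray],
             threshold: float = 0.9) -> str | None:
    """Compare one stored frame against a watchlist of known embeddings.
    Returns the best match above the threshold, or None."""
    probe = extract_embedding(frame)
    best_name, best_score = None, threshold
    for name, ref in watchlist.items():
        score = float(np.dot(probe, ref))  # cosine similarity of unit vectors
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

The point of the sketch is that the matching step lives on the server, so a claim that “the camera does not perform facial recognition” says nothing about what happens after capture.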

Last month, Shrishak called on the Irish Government to forbid the use of Dahua and Hikvision CCTV systems, as these companies are linked to human rights abuses in China.

Can regulation be enforced?

The EU is introducing new rules to rein in the power of Big Tech, with measures such as the Digital Markets Act (DMA) and the Digital Services Act looking to tackle the dominance large multinationals hold in Europe’s digital space and make them accountable for illegal content.

The DMA, for example, aims to blacklist certain practices used by large ‘gatekeeper’ platforms – companies that wield a disproportionate amount of power in their markets – and enable the European Commission to carry out market investigations and sanction non-compliant behaviours.

Shrishak said one of the issues he sees with regulation such as this is how it will be enforced, and he was critical of the EU’s proposals to regulate AI for this reason.

The EU plans to consider AI under risk categories, with unacceptable-risk systems – those seen as a clear threat to the safety, livelihoods and rights of people – being banned outright. This includes so-called social credit scoring, such as a controversial system seen in China, and applications that manipulate human behaviour.

High-risk use cases would include the use of AI in critical infrastructure, law enforcement, migration and border patrol, employment and recruitment, and education.

“The way it’s currently phrased, even the enforcement framework is very weak,” Shrishak said. “So even if this is put in place, there is not even a possibility that I would say there is a clear way to enforce things.”

Shrishak also suggested the regulation needs to go further, as it does not propose an outright ban on facial recognition technology and there are still cases where law enforcement would be able to use it.

Clearview AI

One controversial facial recognition company is Clearview AI, which has faced criticism and pressure from watchdogs around the world.

Clearview AI has built a database with billions of images from the internet and works with customers such as law enforcement agencies to compare facial data against its database. The company says its tech is used to solve crimes and complies with all standards of privacy and law.
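As a rough illustration of what such a comparison involves (a sketch under assumed names and sizes, not Clearview’s actual system), matching amounts to nearest-neighbour search: a ‘faceprint’ extracted from a probe image is scored against every faceprint in the gallery.

```python
# Illustrative sketch of faceprint matching against a large gallery.
# Sizes, dimensions and data are made up; real systems index billions
# of vectors with approximate nearest-neighbour search rather than the
# brute-force scan shown here.
import numpy as np

rng = np.random.default_rng(0)
N, D = 100_000, 512  # stand-in for billions of 512-dim faceprints
gallery = rng.standard_normal((N, D)).astype(np.float32)
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)  # unit rows

def top_matches(probe: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k gallery faceprints most similar to the
    probe (cosine similarity, since all rows are unit-normalised)."""
    probe = probe / np.linalg.norm(probe)
    scores = gallery @ probe          # one similarity score per entry
    return np.argsort(scores)[-k:][::-1]

candidate_ids = top_matches(rng.standard_normal(D).astype(np.float32))
```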

But last November, the UK Information Commissioner’s Office said Clearview’s database is “likely to include the data of a substantial number of people from the UK”, with images that may have been gathered without people’s knowledge from sources such as social media platforms.

In the same month, Australia’s top information authority ordered Clearview AI to stop collecting facial images and biometric templates of Australian citizens, and to delete what data it already has.

Australia and the UK are not the only countries where Clearview AI faces regulatory scrutiny. In February 2021, Canada’s federal privacy commissioner deemed the company’s practices illegal, saying it collected facial images of Canadians without their consent.

Clearview AI was back in the spotlight in February, when it was reported that the company told investors it is on track to have 100bn facial photos in its database within a year. This would be enough to identify “almost everyone in the world”, according to documents obtained by The Washington Post.

But despite international pressure, Shrishak said it is difficult for regulators to enforce rulings against the company because it is headquartered in the US and does not appear to have offices in other countries. He said it would be easier to have “enforcement teeth” if a US authority introduced laws cracking down on the technology.

Shrishak noted that an alternative way to crack down would be to restrict the ability of facial recognition companies to sell to organisations such as law enforcement agencies.

In 2020, the American Civil Liberties Union (ACLU) of Illinois filed a lawsuit against Clearview AI, alleging it violated the privacy rights of citizens. The ACLU said the case was filed after a New York Times investigation revealed details of the company’s tracking and surveillance tools.

That lawsuit reached a settlement on 9 May this year, when Clearview AI agreed to a new set of restrictions. According to the ACLU, this includes a permanent, US-wide ban on making its faceprint database available to most businesses and other private entities.

“The company will also cease selling access to its database to any entity in Illinois, including state and local police, for five years,” the ACLU said.


Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com