AI for policing: Where should we draw the line?

10 Aug 2022

Image: © Alexander/Stock.adobe.com

There are clear benefits to AI being used by police, but a number of human rights groups and AI experts have expressed concerns about the potential misuse of this tech.

Law enforcement agencies around the world are considering the use of artificial intelligence to prevent and solve crimes.

There are various ways AI technology can be used in policing, such as surveillance, crowd monitoring, predictive analytics to flag likely areas for crime, and facial recognition to identify criminals.

Facial recognition is already being used by law enforcement groups, particularly in China and the US. The US Department of Homeland Security is also pushing for the international sharing of biometric data.

Its international data-sharing programme aims to create a “biometric and biographic information-sharing capability to support border security and immigration vetting”, according to documents recently shared by Statewatch.

The discussion on AI and policing has spread to Ireland in recent months, as the Department of Justice moves to allow the use of facial recognition tech by An Garda Síochána under planned legislation.

On the back of this news, Fianna Fáil Senator Malcolm Byrne noted both the benefits of facial recognition technology in the hands of law enforcement and the risks that come with its use.

“For instance, in India up to 10,000 children who were missing were identified using facial recognition technology,” Byrne said in a Seanad debate last month.

“In contrast, however, we have China, which has effectively become a surveillance society through the use of facial recognition technology. It is important that if we deploy this technology, it is done with full public consultation and informed by human rights and ethics.”

‘We need to figure out what kind of society we want to live in’
– SERGE DROZ

Serge Droz, an IT security expert and former chair of FIRST (Forum for Incident Response and Security Teams), told SiliconRepublic.com that there is “tremendous value” in using AI for law enforcement purposes, but also “a multitude of risks” that need to be considered.

“In society, we always have to find a balance between privacy and security. Human rights guarantee you both and it’s a dilemma,” Droz said. “Technology tends to kind of shift these things and that’s something we need to discuss.”

Privacy issues

Privacy is one of the main issues that is raised when it comes to AI technology in policing. The Irish Council for Civil Liberties (ICCL) issued a statement in May against the use of facial recognition technology by Gardaí, with privacy being one of the big concerns.

The group said it is aligned with more than 170 organisations around the world that are calling for “an outright ban on biometric surveillance in public spaces”.

“[Facial recognition technology] and other biometric surveillance tools enable mass surveillance and discriminatory targeted surveillance,” the ICCL said. “They have the capacity to identify and track people everywhere they go, undermining the right to privacy and data protection, the right to free assembly and association, and the right to equality and non-discrimination.”

Droz also highlighted the potential abuse of this sort of technology, with a risk of “over-policing” by law enforcement agencies.

“Being able to track someone through an entire city is quite invasive into privacy,” Droz said. “So that’s totally okay for serious crimes, but we don’t really want law enforcement to do this for someone that didn’t pay a parking ticket.”

Regulation loopholes

Dr Kris Shrishak, a technology fellow at the ICCL, previously spoke to SiliconRepublic.com about the challenges of regulation when it comes to facial recognition technology.

One key issue he raised is that a lot of this tech is being developed by large, privately owned companies such as Clearview AI, which could find loopholes in regulations.

For example, a company can claim that there is no facial recognition tech within its CCTV cameras, while the images those cameras capture are still moved onto servers that run this technology.

Speaking again to SiliconRepublic.com, Shrishak said this can also apply to cameras used in law enforcement drones. This is a topic that may arise in Ireland, as the Irish Defence Forces teamed up with scientists to develop a new drone capable of policing the Irish offshore area, The Irish Times recently reported.

“Even when drones themselves cannot perform a lot of computation or have limited storage, the recordings from the drones can be sent to and analysed on servers with powerful capability,” Shrishak said.
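To make the loophole concrete, here is a minimal sketch in Python of the device/server split Shrishak describes. Every name in it is hypothetical and it stands in for no real vendor’s system; the point is simply that a device can truthfully carry no recognition software while everything it records is analysed elsewhere.

```python
# Hypothetical sketch of the device/server split described above.
# No class or function here reflects any real vendor's system.

class RecognitionServer:
    """Stand-in for a remote server with ample compute and storage."""

    def __init__(self) -> None:
        self.stored_frames: list[bytes] = []

    def ingest(self, frame: bytes) -> None:
        # Frames are retained and analysed on the server side.
        self.stored_frames.append(frame)
        self.run_recognition(frame)

    def run_recognition(self, frame: bytes) -> list[str]:
        # Placeholder for a real face-matching model run against a watchlist.
        return []


def capture_and_upload(frame: bytes, server: RecognitionServer) -> None:
    # Device side: no biometric processing happens here, just transport,
    # so the claim "no facial recognition in the camera" stays technically true.
    server.ingest(frame)


camera_frame = b"<jpeg bytes from a CCTV camera or drone>"
capture_and_upload(camera_frame, RecognitionServer())
```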

While the EU and other nations are working to regulate AI, the speed at which technology is developing can lead to gaps in regulation and create a risk of the tech being misused.

Theresa Kushner, a data expert who heads NTT Data’s North America Innovation Centre, said it’s important to consider not only the technology, but “how we as people deal with it”.

“I always use the example of a self-driving car. Self-driving cars are wonderful until a car hits somebody, then who is responsible?” Kushner said. “There’s nobody driving the car and our environments, the police and all of our procedures, have not caught up to the technology itself.

“So when you start looking at facial recognition, especially in law enforcement, to be able to identify people that did something wrong or whatever you’re trying to identify, you run a lot of risk.”

The risk of bias

Advances in AI and machine learning are opening up new possibilities for police forces, including the ability to predict future crimes.

In June, data and social scientists at the University of Chicago said they developed a new algorithm that could predict crimes one week in advance with roughly 90pc accuracy.

However, a common concern with AI models is the risk of bias in the data they are trained on, something the study itself highlighted. The researchers said the algorithm should be added to a “toolbox of urban policies and policing strategies”, rather than used to direct law enforcement.

Noel Hara, public sector CTO of NTT Data, said the risk of racial bias is “a very big deal” with AI and machine learning, due to models being trained on potentially biased data. In 2020, an MIT image library used to train AI was withdrawn after it was found to contain racist and misogynistic terms.

“If the statistics show that people of a certain racial group commit more crimes, the system will automatically, if it’s not trained properly, score people of that ethnic group at a higher score and it’ll call them up as being more likely to be the perpetrator,” Hara said.
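The feedback loop Hara describes can be shown in a few lines of code. Below is a minimal, self-contained Python sketch; all of the numbers are invented for illustration and come from no real dataset. Two groups offend at exactly the same rate, but one is policed twice as heavily, so a model trained on the resulting arrest records scores that group as roughly twice as risky.

```python
# Invented illustration of bias from skewed data collection,
# not a depiction of any real policing system or dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with the SAME underlying offence rate (10pc)...
group = rng.integers(0, 2, n)
offended = rng.random(n) < 0.10

# ...but group 1 is policed twice as heavily, so its offences are
# twice as likely to show up in the training data as arrests.
detection_rate = np.where(group == 1, 0.8, 0.4)
arrested = offended & (rng.random(n) < detection_rate)

# A model trained on the arrest records learns the policing pattern,
# not the true offence rate.
model = LogisticRegression().fit(group.reshape(-1, 1), arrested)
print("predicted risk, group 0:", model.predict_proba([[0]])[0, 1])
print("predicted risk, group 1:", model.predict_proba([[1]])[0, 1])
```

The model is doing exactly what it was trained to do; the bias sits in how the data was collected, which is Hara’s point about systems that are “not trained properly”.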

What society do we want?

AI has attracted the attention of regulators, with its use in law enforcement highlighted in EU proposals.

The EU’s draft regulations suggest putting AI under four risk categories, with unacceptable risk systems – those seen as a clear threat to the safety, livelihoods and rights of people – being banned outright. This includes so-called social credit scores, such as a controversial system seen in China, and applications that manipulate human behaviour.

‘High-risk’ use cases would include the use of AI in critical infrastructure, law enforcement, migration and border control, employment and recruitment, and education. These use cases would be restricted rather than banned outright, and European data protection watchdogs criticised the proposals for not going far enough on live facial recognition in public places.

The global discussion on how AI should be implemented by law enforcement is far from over. Groups like the ICCL are calling for a full ban, while other experts like Hara believe this technology should be used, but in a responsible way.

Hara gave the example of festivals or large events, where attendees could “opt in” and trade some of their privacy for convenience, such as using facial recognition instead of a ticket to speed up queues. This sort of technology was reportedly used at events in Germany last year.

“But it’s also allowing the authorities to know if there’s a bad actor, they’re able to go in and manage that situation,” Hara said. “But people are opting in in that situation, they’re not just walking down a public street.”

Droz said that politicians and the public have to decide what sort of balance they want with the use of this technology, as it’s “not a technical problem” but a societal one.

“We need to figure out what kind of society we want to live in,” Droz said. “If you go to China, facial recognition is everywhere, you drop your bottles into the recycling bin, a picture is taken, you get social credits. If you jaywalk, your picture is taken and you deduct your social credits.

“Do we want to live in a society like this? Personally, no. But let’s face it, in China there’s a lot more recycling.”


Leigh Mc Gowran is a journalist with Silicon Republic
