Facial-recognition software used by UK police has been revealed to be highly inaccurate.
Civil liberties group Big Brother Watch today (15 May) published a report outlining serious claims about the accuracy of facial-recognition tools employed by UK law enforcement bodies.
Computer databases of faces are linked to CCTV and other cameras, and many see facial recognition as a positive advance for law enforcement. Privacy advocates, however, have concerns about how the technology is being implemented.
Two police forces testing facial recognition
The report authors submitted freedom-of-information requests to every UK police force, and both the Met and South Wales forces said they were testing facial-recognition technology. South Wales Police said the technology had made 2,685 matches between May 2017 and March 2018, but 2,451 of those were false alarms.
The product used by both police forces is called NeoFace Watch, made by Japanese firm NEC.
Big Brother Watch found that the system, used by the Met at the 2017 Notting Hill Carnival, was wrong 98pc of the time. Officers were falsely told 102 times that a suspect had been spotted. South Wales Police was granted more than £2m in government funding to test the system, but it was 91pc inaccurate in its testing.
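The South Wales figure follows directly from the match counts quoted above; a minimal sketch of the false-alarm arithmetic (using only the numbers from the report):

```python
# False-alarm rate for South Wales Police's NeoFace Watch trial,
# per the Big Brother Watch report: 2,685 total matches,
# of which 2,451 were false alarms.
total_matches = 2685
false_alarms = 2451

false_alarm_rate = false_alarms / total_matches
print(f"False-alarm rate: {false_alarm_rate:.0%}")  # prints "False-alarm rate: 91%"
```

The same calculation cannot be reproduced exactly for the Met's Notting Hill deployment, since the report quotes the 102 false alerts and the 98pc rate but the article does not give the total number of alerts.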
On 31 occasions, police followed up with people of concern only to find innocent people had been stopped due to false identifications.
Walking ID cards
Big Brother Watch said: “Automated facial-recognition cameras are biometric identification checkpoints that risk making members of the public walking ID cards.
“It is plainly disproportionate to deploy a technology by which the face of every passer-by is analysed, mapped and their identity checked.”
The UK’s independent biometrics commissioner, Paul Wiles, told The Independent that the technology is “not yet fit for use” judging by the figures outlined in the report.
Wiles also called for more regulation around the use of such systems. “In terms of governance, technical development and deployment is running ahead of legislation, and these new biometrics urgently need a legislative framework, as already exists for DNA and fingerprints.”
More oversight needed
Jennifer Lynch, senior staff attorney at the Electronic Frontier Foundation (EFF), warned of the lack of oversight of biometric systems. “The adoption of technologies like these is occurring without meaningful oversight, without proper accuracy testing and without the enactment of legal protections to prevent misuse.
“If we move forward on this path, these systems will mistakenly identify innocent people as criminals or terrorists, and will be used by unscrupulous governments to silence unwelcome voices.”
Police defend testing
Both the South Wales and Met Police forces have defended the use of the technology. Richard Lewis, deputy chief constable of South Wales Police, said: “When we first deployed and we were learning how to use it … some of the digital images we used weren’t of sufficient quality. Because of the poor quality, it was identifying people wrongly. They weren’t able to get the detail from the picture.”
He added that safeguards are in place to prevent action being taken against innocent people. A Met Police spokesperson said that all alerts on its watchlist were deleted after 30 days and faces that do not generate an alert are immediately deleted.
The UK information commissioner, Elizabeth Denham, said that police need to demonstrate that facial-recognition technology is effective and that less intrusive methods are not available. “Should my concerns not be addressed, I will consider what legal action is needed to ensure the right protections are in place for the public.”
The UK Home Office told the BBC it plans to publish its biometrics strategy in June. The Scottish government commissioned and published a report into the use of biometrics in March of this year.