Apple’s new tech for targeting child abuse images draws concern

6 Aug 2021


Even a ‘carefully thought-out and narrowly scoped backdoor is still a backdoor’, claim critics of Apple’s new child protection measures.

Apple is rolling out new child protection efforts, with technology designed to detect images of child sexual abuse material (CSAM) on iOS systems. But while some are championing the decision, others fear the Apple CSAM features are opening the gates to privacy issues.

Three features are set to be introduced for child protection: barriers around searching for CSAM, parental alerts for explicit images on a child’s phone, and alerting law enforcement if CSAM is being collected in a user’s iCloud photos.

While these features have currently only been announced for the US, an Apple blog post said that the company’s “efforts will evolve and expand over time”.

The features were criticised on Twitter by Matthew Green, a cryptographer and security researcher at Johns Hopkins University. “This sort of tool can be a boon for finding child pornography in people’s phones,” he wrote. “But imagine what it could do in the hands of an authoritarian government?”

Non-profit digital rights group Electronic Frontier Foundation (EFF) was equally critical. “To say that we are disappointed by Apple’s plans is an understatement,” the group wrote.

“Apple can explain at length how its technical implementation will preserve privacy and security in its proposed backdoor, but at the end of the day, even a thoroughly documented, carefully thought-out and narrowly scoped backdoor is still a backdoor.”

But what are the Apple CSAM features and why are they drawing such heat?

Inappropriate messaging and search queries

The first is relatively straightforward: Siri and Search will include new resources for reporting CSAM and will intervene if users search for such material. A message will pop up explaining why the search is harmful and users will be guided to resources that can help.

The second feature is based around messaging. If a user on a family account is under the age of 13 and they send or receive an explicit image, it will initially be blurred out. The child will receive a notification letting them know that this form of material can be harmful and that their parents will be alerted if they choose to view or send the image.

Machine learning will be used to determine which images are sexually explicit and subsequently determine which require flagging with a parent. Apple said on-device machine learning will be used and the company “does not get access to the messages”.

But the EFF highlighted that even a well-intentioned effort can lead to problems down the line. “All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children’s, but anyone’s accounts,” wrote the group.

It was also critical of the use of machine learning to detect explicit content, which can be difficult to audit, pointing to the time Facebook flagged an image of the Little Mermaid statue under its nudity rules.

Detecting CSAM

The final piece of the puzzle is CSAM detection. This is the most technical of the features and uses Apple’s new NeuralHash technology, which is designed to flag material uploaded to iCloud if it matches existing CSAM content in the database of the National Center for Missing and Exploited Children (NCMEC) in the US.

This process is carried out on the user’s device. Images will be translated on the phone into an unreadable string of letters and numbers, known as a hash, that is unique to the characteristics of each photo. The phone will also carry another set of hashes representing the known child abuse images in the database of NCMEC and other child safety organisations.
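As a rough illustration of that matching step, the sketch below compares an on-device digest against a set of known digests. It is only a toy under loose assumptions: NeuralHash is a perceptual hash derived from image features, not the cryptographic hash used here, and every name and value in the snippet is hypothetical.

```python
# Toy sketch of on-device hash matching, not Apple's NeuralHash.
# A cryptographic hash stands in for the perceptual hash; the set of
# "known" digests is a hypothetical placeholder.
import hashlib

def toy_image_hash(image_bytes: bytes) -> str:
    # Stand-in for NeuralHash: any function mapping an image to a short digest.
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical digests that would ship with the operating system.
known_hashes = {toy_image_hash(b"placeholder known image")}

def matches_known(image_bytes: bytes) -> bool:
    # The phone only checks membership; the photo itself goes nowhere in this toy.
    return toy_image_hash(image_bytes) in known_hashes

print(matches_known(b"an ordinary holiday photo"))   # False: no match
print(matches_known(b"placeholder known image"))     # True: digest is in the set
```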

If an image’s hash matches one in that database, the phone will generate a cryptographic safety voucher with the match result. The company said that “private set intersection allows Apple to learn if an image hash matches the known CSAM image hashes, without learning anything about image hashes that do not match” and that this technology is key to maintaining privacy.
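Apple has not published code for this step, but the general idea behind private set intersection can be sketched with a classic Diffie-Hellman-style ‘double blinding’ construction: each side raises the other’s blinded hashes to its own secret exponent, and only items held by both sides end up with identical doubly blinded values. The sketch below is a minimal illustration with toy parameters, not Apple’s actual protocol, and all the hashes in it are invented.

```python
# Minimal sketch of the idea behind private set intersection (PSI), using
# Diffie-Hellman-style double blinding. Toy parameters and illustrative data;
# this is NOT Apple's protocol, which is built around its safety vouchers.
import hashlib
import secrets

P = 2**521 - 1   # toy prime modulus; real systems use vetted groups (e.g. elliptic curves)

def hash_to_group(item: bytes) -> int:
    # Map an item (e.g. an image hash) to a group element.
    return int.from_bytes(hashlib.sha256(item).digest(), "big") % P

def blind(element: int, secret_exp: int) -> int:
    # Modular exponentiation commutes: (g**a)**b == (g**b)**a (mod P).
    return pow(element, secret_exp, P)

# Each side holds its own hashes and a secret exponent it never shares.
device_hashes = [b"photo-1", b"photo-2", b"photo-3"]
server_hashes = [b"photo-2", b"known-item-x"]
a = secrets.randbelow(P - 2) + 1   # device secret
b = secrets.randbelow(P - 2) + 1   # server secret

# Device sends its hashes blinded under a; the server blinds them again under b.
double_blinded_device = {blind(blind(hash_to_group(h), a), b) for h in device_hashes}
# Server sends its hashes blinded under b; the device blinds them again under a.
double_blinded_server = {blind(blind(hash_to_group(h), b), a) for h in server_hashes}

# Only items held by both sides produce identical doubly blinded values.
# (In a real protocol only one party would compute and learn this intersection.)
shared = double_blinded_device & double_blinded_server
print(f"{len(shared)} item(s) in common")   # 1, corresponding to b"photo-2"
```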

Apple uses a cryptographic principle called “threshold secret sharing”, meaning that the contents of the vouchers cannot be interpreted by the company unless the iCloud account crosses a threshold of known CSAM content.
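That thresholding idea can be illustrated with Shamir-style secret sharing: a key is split into shares so that holding fewer shares than the threshold reveals nothing useful, while reaching the threshold allows the key to be reconstructed exactly. The sketch below is a minimal illustration; the field size, threshold and share handling are assumptions for the demo, not Apple’s actual parameters.

```python
# Minimal Shamir-style threshold secret sharing sketch. Conceptually, each
# matching safety voucher could carry one share of a per-account key, and only
# once enough matches accumulate can the key be rebuilt and the vouchers read.
# Field size, threshold and share format below are illustrative assumptions.
import secrets

PRIME = 2**127 - 1          # toy prime field for the arithmetic
THRESHOLD = 3               # shares (matches) needed to reconstruct the key

def make_shares(secret: int, n_shares: int, threshold: int = THRESHOLD):
    # Random polynomial of degree threshold-1 with the secret as constant term.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

account_key = secrets.randbelow(PRIME)
shares = make_shares(account_key, n_shares=5)

# With fewer shares than the threshold, reconstruction yields garbage...
print(reconstruct(shares[:2]) == account_key)   # almost certainly False
# ...but once the threshold is reached, the key is recovered exactly.
print(reconstruct(shares[:3]) == account_key)   # True
```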

Once this threshold is crossed, Apple will have access to the photos and will manually check for the presence of CSAM. If it confirms that there are child abuse images, the user account will be disabled and NCMEC will be alerted.

Users will have the ability to appeal the decision, but Apple claimed that the system has “an extremely low error rate of less than one in 1trn account[s] per year”.

But Green highlighted on Twitter that the technology is very sophisticated for what is being accomplished, and that “eventually it could be a key ingredient in adding surveillance to encrypted messaging systems”. The EFF added that “this is a decrease in privacy for all iCloud Photos users”.

These criticisms echo previous debates in the EU over features targeting CSAM, where recent measures were met with mixed responses.

It is unclear to what degree these features will roll out worldwide, but Benny Pinkas of Bar-Ilan University, who reviewed Apple’s new technology, said it may provide a solution to “a very challenging problem”.

Sam Cox was a journalist at Silicon Republic covering sci-tech news

editorial@siliconrepublic.com