Apple defends tech for detecting child abuse images amid privacy concerns

10 Aug 2021


After concerns were raised over data privacy and potential misuse, Apple said it would refuse any government demands to use its new tech for surveillance.

Apple has responded to criticism of its new tech to target child abuse images by releasing a document that aims to address some of the privacy concerns raised.

Planned child protection measures on iOS, announced last week, include barriers around searching for child sexual abuse material (CSAM) and parental alerts for explicit images on a child’s phone. But they also include new technology designed to detect CSAM images using cryptographic principles and alert law enforcement if CSAM is being collected in a user’s iCloud.

Critics have been quick to point out the privacy implications of such a move as the tech could potentially be expanded beyond CSAM for surveillance or other purposes. Non-profit digital rights group the Electronic Frontier Foundation said that “at the end of the day, even a thoroughly documented, carefully thought-out and narrowly scoped backdoor is still a backdoor”.

‘Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it’
– APPLE

Now Apple has come out with an FAQ document that explains its proposed features in greater detail and attempts to answer some of the questions raised, including whether governments could ask Apple to use the tech to spy on people.

Apple said it would “refuse any such demands” from governments to add non-CSAM images to its new tech process.

“We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before and have steadfastly refused those demands. We will continue to refuse them in the future,” the company added.

“Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it.”

Data privacy concerns

The company also made clarifications around its communication safety and CSAM detection features, noting that the technologies behind the two measures are distinct.

Its new communication safety feature, which is designed to prevent children from sharing and receiving sexually explicit images, requires parents to opt in and turn on the feature. It can only be enabled for child accounts set up through Family Sharing.

Apple said that the feature does not give it access to communications in Messages and does not break end-to-end encryption.

“If the feature is enabled for the child account, the device will evaluate images in Messages and present an intervention if the image is determined to be sexually explicit,” it said.

“For accounts of children aged 12 and under, parents can set up parental notifications which will be sent if the child confirms and sends or views an image that has been determined to be sexually explicit.”

For children between the ages of 13 and 17, warnings will still be shown before viewing explicit material, but parents will not be notified, Apple said.
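Taken together, the age rules Apple describes amount to a simple on-device policy: warn every child for whom the feature is enabled, and notify parents only for children aged 12 and under whose parents have turned notifications on. The Python sketch below encodes that policy as described in the article; the function and field names are illustrative, as Apple has not published an API for this feature.

```python
from dataclasses import dataclass

@dataclass
class Intervention:
    """Outcome of the on-device check for a sexually explicit image."""
    warn_child: bool      # show the warning described by Apple
    notify_parent: bool   # send the optional parental notification

def handle_explicit_image(child_age: int, feature_enabled: bool,
                          parental_notifications_on: bool) -> Intervention:
    if not feature_enabled:
        # The feature is opt-in; nothing happens unless a parent turns it on.
        return Intervention(warn_child=False, notify_parent=False)
    # Warnings are shown to children of all ages when the feature is enabled;
    # parental notifications are limited to children aged 12 and under.
    notify = child_age <= 12 and parental_notifications_on
    return Intervention(warn_child=True, notify_parent=notify)
```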

For the CSAM detection feature, Apple clarified that the technology only applies to photos uploaded to iCloud Photos, and that it does not scan images stored only on the device.

“Existing techniques as implemented by other companies scan all user photos stored in the cloud,” it added. “This creates privacy risk for all users.”

The new process translates images into unreadable strings of letters and numbers, known as hashes, and matches these against a set of hashes representing known CSAM images supplied by child safety organisations.
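The basic idea of hash matching can be sketched in a few lines of Python. This is not Apple's implementation: the real system uses a perceptual hash computed on the device and cryptographic matching, whereas the sketch below uses an ordinary SHA-256 file digest and a made-up database purely to illustrate comparing hashes rather than images.

```python
import hashlib
from pathlib import Path

# Hypothetical database of hashes of known images supplied by child safety
# organisations. In Apple's system these are perceptual hash values produced
# on-device, not plain file digests; SHA-256 is used here only to keep the
# illustration self-contained.
KNOWN_HASHES = {
    "3f79bb7b435b05321651daefd374cd21b4c2e6b8",  # placeholder value
}

def image_hash(path: Path) -> str:
    """Reduce an image file to an unreadable fixed-length string (a hash)."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def matches_known_image(path: Path) -> bool:
    """Check whether the image's hash appears in the known-hash database."""
    return image_hash(path) in KNOWN_HASHES
```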

“Using new applications of cryptography, Apple is able to use these hashes to learn only about iCloud Photos accounts that are storing collections of photos that match to these known CSAM images and is then only able to learn about photos that are known CSAM, without learning about or seeing any other photos,” the company said.

It added that a human review is conducted before any user account is reported to law enforcement, and that the likelihood of the system incorrectly flagging an account is “less than one in one trillion” per year.
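The “collections of photos” language and the human-review step can be illustrated with a simple threshold check: an account is only surfaced for review once it accumulates a certain number of matches. The threshold value and the names below are hypothetical; Apple has not published the parameters behind its “one in one trillion” estimate.

```python
from collections import defaultdict

# Hypothetical threshold: an account is only passed to human review once it
# accumulates this many matches against the known-hash database. The real
# threshold and the way matches are counted are Apple's internal details.
MATCH_THRESHOLD = 30

match_counts: defaultdict[str, int] = defaultdict(int)

def record_match(account_id: str) -> bool:
    """Count one matched photo for an account and report whether the account
    has now crossed the threshold and should go to a human reviewer."""
    match_counts[account_id] += 1
    return match_counts[account_id] >= MATCH_THRESHOLD
```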

Vish Gain is a journalist with Silicon Republic

editorial@siliconrepublic.com