Google’s new free AI tool aims to help humans remove child abuse images

4 Sep 2018

Image: Evdokimov Maxim/Shutterstock

With some of the worst child sexual abuse images moderated by humans, Google hopes its new AI tool will ease some of the burden.

Content moderation has proven a minefield for companies of late, one global social network in particular.

Regardless of the company, however, some of the worst the internet has to offer is often left to humans to moderate, such as child sexual abuse images.

Existing artificial intelligence (AI) software works from a database of known images, letting it identify and find copies spread online. Images that haven't been previously identified, however, still require a human to step in and make a decision.

To that end, Google has now revealed it is releasing a free AI tool for NGOs and its industry partners to assist human moderators and ease some of the burden.

The system uses Google's deep neural networks to scan flagged images and push those it deems the highest priority for review to the top of the moderator's queue. Confirmed images are then added to the ever-growing database of child sexual abuse imagery, allowing the AI and other tools, such as PhotoDNA, to filter them out automatically in future.
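The flow described above could be sketched roughly as follows. This is a hypothetical illustration only, not Google's implementation: the `triage` and `confirm` functions, the classifier callback and the use of a cryptographic hash (real systems such as PhotoDNA use perceptual hashes that survive re-encoding) are all assumptions for the sake of the example.

```python
import hashlib

# Stands in for a PhotoDNA-style database of known-image hashes.
known_hashes = set()


def image_hash(image_bytes: bytes) -> str:
    # Illustrative only: production systems use perceptual hashing,
    # which matches re-encoded or slightly altered copies.
    return hashlib.sha256(image_bytes).hexdigest()


def triage(flagged, classifier):
    """Split flagged images into automatic matches and a ranked review queue."""
    auto_matched, review_queue = [], []
    for img in flagged:
        if image_hash(img) in known_hashes:
            auto_matched.append(img)  # already known: no human review needed
        else:
            review_queue.append((classifier(img), img))
    # Highest classifier score goes to the top of the moderator's list.
    review_queue.sort(key=lambda pair: pair[0], reverse=True)
    return auto_matched, [img for _, img in review_queue]


def confirm(image_bytes: bytes) -> None:
    # Once a moderator confirms an image, add it to the database so
    # future copies are filtered out automatically.
    known_hashes.add(image_hash(image_bytes))
```

The key design point is the feedback loop: every human decision shrinks the pool of images that ever need human eyes again.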

During testing, Google claimed the system helped one reviewer find and take action on 700pc more content over a given period.

Trying to make the internet a safer place

One of the organisations that Google partnered with to develop the tool is the Internet Watch Foundation, which is dedicated to removing child sexual abuse images online.

Speaking of the new tool, the foundation’s CEO, Susie Hargreaves, said: “By sharing this new technology, the identification of images could be speeded up, which in turn could make the internet a safer place for both survivors and users.”

Expanding upon this, deputy CEO Fred Langford told The Verge he was surprised at the pace of technological change that led to this new tool.

“A few years ago, I would have said that sort of classifier was five, six years away,” he said.

“But now I think we’re only one or two years away from creating something that is fully automated in some cases.”

Colm Gorey was a senior journalist with Silicon Republic