Google scientists are understood to have been working for some time on a project to simulate aspects of the human brain using a cluster of 1,000 machines with some 16,000 processor cores. And what has all this effort gotten Google? Well, the contraption can recognise a cat.
It is understood that the cluster sifted through some 10m thumbnail images taken from YouTube videos.
The irony is that cat videos are probably the most popular genre of video on YouTube.
World’s largest neural network?
The scientists are based at Google’s ultra-secret X Labs in California. According to their abstract: “We consider the problem of building high-level, class-specific feature detectors from only unlabelled data. For example, is it possible to learn a face detector using only unlabelled images?
“To answer this, we train a nine-layered locally connected sparse autoencoder with pooling and local contrast normalisation on a large dataset of images (the model has 1bn connections, the dataset has 10m 200 x 200 pixel images downloaded from the internet).
“We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not.
“Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts, such as cat faces and human bodies. Starting with these learned features, we trained our network to obtain 15.8pc accuracy in recognising 20,000 object categories from ImageNet, a leap of 70pc relative improvement over the previous state-of-the-art,” the scientists said.
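To give a sense of the building block the abstract describes, here is a toy single-layer sparse autoencoder trained with plain gradient descent in Python/NumPy. It is a minimal sketch at a vastly smaller scale than the nine-layered, 1bn-connection network the Google team trained; all sizes, learning rates and penalty weights below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SparseAutoencoder:
    """Single hidden layer with tied weights: W encodes, W.T decodes."""

    def __init__(self, n_visible, n_hidden, rho=0.05, beta=0.01):
        self.W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
        self.b = np.zeros(n_hidden)   # hidden bias
        self.c = np.zeros(n_visible)  # visible bias
        self.rho = rho                # target mean activation (sparsity)
        self.beta = beta              # weight of the sparsity penalty

    def step(self, x, lr=0.5):
        n = x.shape[0]
        h = sigmoid(x @ self.W + self.b)    # encode
        y = sigmoid(h @ self.W.T + self.c)  # decode (reconstruct)
        err = y - x

        # Backprop through the decoder...
        d_y = err * y * (1.0 - y)
        # ...add the KL sparsity penalty, which pushes each unit's
        # mean activation towards rho...
        rho_hat = h.mean(axis=0)
        sparse = self.beta * (-self.rho / rho_hat
                              + (1.0 - self.rho) / (1.0 - rho_hat))
        # ...then backprop through the encoder.
        d_h = (d_y @ self.W + sparse) * h * (1.0 - h)

        self.W -= lr * (d_y.T @ h + x.T @ d_h) / n
        self.b -= lr * d_h.mean(axis=0)
        self.c -= lr * d_y.mean(axis=0)
        return 0.5 * np.mean(err ** 2)      # reconstruction loss

# Train on random "images" and watch the reconstruction loss fall.
data = rng.random((64, 16))                 # 64 samples, 16 "pixels"
ae = SparseAutoencoder(n_visible=16, n_hidden=8)
losses = [ae.step(data) for _ in range(300)]
```

The real system stacked nine such layers with pooling and local contrast normalisation between them, and trained asynchronously across the 1,000-machine cluster rather than in a single loop like this.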