The possibility of putting a supercomputer in your pocket just became a lot more realistic thanks to a new chip.
Many of the major breakthroughs we’ve achieved in the field of artificial intelligence (AI) – such as facial recognition systems – couldn’t have been made without the help of powerful neural networks.
These tightly interconnected layers of processors can take in huge amounts of data and make sense of it. What holds neural networks back, however, is their reliance on large, power-hungry hardware, which is largely unavailable in everyday circumstances.
While we can access the power of neural networks through our phones, that is only because the data they generate is uploaded to a distant server and the results sent back, sparing the phone the computational strain.
However, a team from MIT has constructed a special-purpose chip that not only speeds up neural-network computation by a factor of seven, but also cuts power consumption by up to 95pc.
This massive leap in technology could now make it possible to shrink the computational power down to where it could be run natively on your smartphone, or even embedded within household appliances.
The research team’s lead, Avishek Biswas, said such a huge reduction in energy consumption comes down to an operation known as the ‘dot product’.
Replicating the brain
“The general processor model is that there is a memory in some part of the chip, and there is a processor in another part of the chip, and you move the data back and forth between them when you do these computations,” he explained.
This constant back-and-forth movement of data, Biswas added, is the dominant consumer of energy.
“But the computation these algorithms do can be simplified to one specific operation, called the dot product. Our approach was, can we implement this dot-product functionality inside the memory so that you don’t need to transfer this data back and forth?”
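The dot product Biswas describes is the weighted sum at the heart of every neural-network layer. As a rough illustration (the values below are made up and have nothing to do with the MIT chip itself), a single artificial neuron computes it like this:

```python
# A neuron's core work is a dot product: multiply each input by its
# learned weight and sum the results. In a conventional processor,
# every weight and input must travel from memory to the compute unit;
# the MIT chip's idea is to perform this sum inside the memory itself.
def dot_product(weights, inputs):
    assert len(weights) == len(inputs)
    return sum(w * x for w, x in zip(weights, inputs))

# Illustrative neuron with three inputs and three learned weights.
inputs = [0.5, -1.0, 2.0]
weights = [0.2, 0.8, -0.5]
activation = dot_product(weights, inputs)
print(activation)  # 0.2*0.5 + 0.8*(-1.0) + (-0.5)*2.0 = -1.7
```

A real network repeats this operation millions of times per inference, which is why shuttling the operands back and forth dominates the energy bill.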
Essentially, Biswas’ chip replicates the brain more faithfully than previous neural network chips.
Speaking of the breakthrough, IBM’s vice-president of artificial intelligence, Dario Gil, said: “The results show impressive specifications for the energy-efficient implementation of convolution operations with memory arrays.
“It certainly will open the possibility to employ more complex convolutional neural networks for image and video classifications in the internet of things in the future.”
Biswas and his thesis adviser, Anantha Chandrakasan, presented these new findings at this week’s International Solid-State Circuits Conference in San Francisco.