New chip technology could enable ‘brain-scale’ AI

25 Aug 2021

The Cerebras WSE-2 chip and AI processor. Image: Elizaveta Elesina/Cerebras

Cerebras said its new chip technology will combine four solutions to support neural networks 100 times larger than those possible on previous AI systems.

The human brain is incredibly sophisticated. It weighs slightly more than 1kg but contains as many as 100trn synapses. In comparison, the most advanced AI clusters to date support 1trn parameters (the machine equivalent of a synapse) and require huge amounts of space and megawatts of power to operate.
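
To put those figures in perspective, here is a quick back-of-the-envelope sketch using only the numbers quoted above. The two-bytes-per-parameter (16-bit weight) storage assumption is ours for illustration, not a Cerebras specification.

```python
# Back-of-the-envelope scale comparison using the article's figures.
BRAIN_SYNAPSES = 100e12    # ~100trn synapses in a human brain
TODAYS_CLUSTERS = 1e12     # ~1trn parameters in the largest clusters to date
CEREBRAS_TARGET = 120e12   # 120trn parameters claimed for the new technology

print(f"Brain vs today's clusters: {BRAIN_SYNAPSES / TODAYS_CLUSTERS:.0f}x gap")
print(f"Cerebras target vs today:  {CEREBRAS_TARGET / TODAYS_CLUSTERS:.0f}x increase")

BYTES_PER_PARAM = 2        # assumption: 16-bit weights
print(f"Storage for 120trn 16-bit weights: "
      f"{CEREBRAS_TARGET * BYTES_PER_PARAM / 1e12:.0f} TB")
```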

Cerebras Systems now claims to have gone beyond these existing systems and created a technology that rivals nature's own scale. The start-up announced yesterday (24 August) that its new portfolio of technologies, when combined, would support an AI model with 120trn parameters.

Silicon Valley-based semiconductor company Cerebras was named on Fast Company's list of most innovative AI companies in 2021 for enabling work in a range of areas, including studying Covid-19 therapeutics, black holes and nuclear fusion.

While other AI systems rely on graphics processing units (GPUs), the Cerebras WSE-2 chip is much more specialised and aims to deliver performance on a much larger scale. The company is using this to power a new chip cluster that it says can "unlock brain-scale neural networks".

“Today, Cerebras moved the industry forward by increasing the size of the largest networks possible by 100 times,” said Andrew Feldman, CEO and co-founder of Cerebras.

“Larger networks, such as GPT-3, have already transformed the natural language processing (NLP) landscape, making possible what was previously unimaginable. The industry is moving past 1trn parameter models, and we are extending that boundary by two orders of magnitude, enabling brain-scale neural networks with 120trn parameters.”

Cerebras is using four different innovations to bypass the limitations of previous AI systems: a new software architecture, a memory extension technology, an interconnect fabric technology and a dynamic sparsity harvesting technology.
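
Cerebras has not published implementation details for these technologies, but the general idea behind sparsity harvesting can be shown with a toy sketch: skip the multiply-by-zero work that dense hardware performs regardless. Everything below is illustrative, not Cerebras code; the real chips detect zeros dynamically in hardware.

```python
import numpy as np

def dense_matvec(weights, activations):
    """Dense multiply: every weight is touched, zeros included."""
    return weights @ activations

def sparse_matvec(weights, activations):
    """Toy 'sparsity harvesting': only non-zero weights do any work."""
    out = np.zeros(weights.shape[0])
    rows, cols = np.nonzero(weights)   # find the useful work on the fly
    for r, c in zip(rows, cols):
        out[r] += weights[r, c] * activations[c]
    return out

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256))
w[rng.random(w.shape) < 0.9] = 0.0     # 90pc of weights pruned to zero
a = rng.standard_normal(256)

assert np.allclose(dense_matvec(w, a), sparse_matvec(w, a))
print(f"Multiplies skipped: {np.mean(w == 0):.0%}")
```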

“The last several years have shown us that, for NLP models, insights scale directly with parameters – the more parameters, the better the results,” said Rick Stevens, associate director of Argonne National Laboratory.

“Cerebras’ inventions, which will provide a 100 times increase in parameter capacity, may have the potential to transform the industry. For the first time we will be able to explore brain-sized models, opening up vast new avenues of research and insight.”

Cerebras said that the new CS-2 accelerator system will be no bigger than a small fridge and will rely on clever solutions to previously intractable problems.

The Cerebras Weight Streaming technology, for example, will allow users to disaggregate compute and parameter storage, meaning researchers can scale model size and training speed independently. The company said this will also remove the latency and bandwidth issues associated with large clusters of small processors.
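
The article does not spell out how weight streaming works internally, but the disaggregation idea can be sketched in a few lines: keep the parameters in a separate store and stream them through the compute unit one layer at a time, so the store can grow without the processor ever holding the whole model. A minimal illustrative sketch, not Cerebras' implementation:

```python
import numpy as np

class ParameterStore:
    """Stand-in for the external memory holding all model weights.
    Its capacity bounds model size, independently of the processor."""
    def __init__(self, layer_shapes, rng):
        self.layers = [rng.standard_normal(s) * 0.1 for s in layer_shapes]

    def stream(self):
        yield from self.layers  # weights flow to compute, layer by layer

def forward(store, x):
    """The compute unit only ever holds one layer's weights at a time."""
    for w in store.stream():
        x = np.maximum(w @ x, 0.0)  # simple ReLU layer
    return x

rng = np.random.default_rng(1)
store = ParameterStore([(64, 32), (64, 64), (10, 64)], rng)
print(forward(store, rng.standard_normal(32)).shape)  # (10,)
```

In this toy model, making the network bigger only means adding layers to ParameterStore; the forward pass needs no more working memory, which is the independence between size and speed the article describes.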

“One of the largest challenges of using large clusters to solve AI problems is the complexity and time required to set up, configure and then optimise them for a specific neural network,” said Karl Freund, founder and principal analyst at Cambrian AI.

“The Weight Streaming execution model is so elegant in its simplicity, and it allows for a much more fundamentally straightforward distribution of work across the CS-2 clusters’ incredible compute resources. With Weight Streaming, Cerebras is removing all the complexity we have to face today around building and efficiently using enormous clusters – moving the industry forward in what I think will be a transformational journey.”

Sam Cox was a journalist at Silicon Republic covering sci-tech news

editorial@siliconrepublic.com