Researchers believe artificial brains may also need to ‘sleep’

8 Jun 2020

Image: © yingyaipumi/Stock.adobe.com

Attempts to build neural networks similar in capacity to the human brain have found that periods of rest help quell instability.

Researchers attempting to build artificial brains using powerful neural networks have discovered something in their systems that potentially mirrors humanity. A team from Los Alamos National Laboratory in the US, led by Yijing Watkins, found that its network simulations became unstable after continuous periods of unsupervised learning.

In a study on its findings, the team said it was only when the networks were exposed to states analogous to the waves that human brains experience during sleep that stability was restored.

“It was as though we were giving the neural networks the equivalent of a good night’s rest,” Watkins said.

“We were fascinated by the prospect of training a neuromorphic processor in a manner analogous to how humans and other biological systems learn from their environment during childhood development.”

The team initially struggled to stabilise simulated neural networks undergoing dictionary training, in which a network learns to classify objects without prior examples to compare them against.

Co-author of the study Garrett Kenyon said that such instability arises only when attempting to create biologically realistic neuromorphic processors.

Putting AI to sleep

“The vast majority of machine learning, deep learning and AI researchers never encounter this issue because, in the very artificial systems they study, they have the luxury of performing global mathematical operations that have the effect of regulating the overall dynamical gain of the system,” he said.

The decision to put the neural networks into a state of sleep was considered a last resort, the team said. They experimented with various types of noise, roughly comparable to the static you might encounter between stations while tuning a radio.

The best results came when using waves of Gaussian noise, which include a wide range of frequencies and amplitudes. It’s believed that this noise mimics the input received by biological neurons during slow-wave sleep.
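The idea of driving a network with wide-band Gaussian noise to restore stability can be illustrated with a toy model. The sketch below is purely illustrative and is not the Los Alamos team's actual algorithm: it feeds Gaussian noise into a small simulated rate network whose weights have grown too large, and applies a hypothetical homeostatic update that nudges each neuron's average activity back toward a target firing rate during the "sleep" phase.

```python
import numpy as np

rng = np.random.default_rng(0)

def sleep_phase(weights, steps=200, lr=0.01, target_rate=0.1):
    """Illustrative 'sleep' phase: drive a toy rate network with
    Gaussian noise and homeostatically rescale weights so that
    average activity drifts back toward a target firing rate.
    (A sketch under assumed dynamics, not the published method.)"""
    for _ in range(steps):
        # Wide-band Gaussian input, loosely analogous to slow-wave noise
        noise = rng.normal(0.0, 1.0, size=weights.shape[0])
        rates = np.maximum(noise @ weights, 0.0)  # ReLU-like responses
        # Shrink weights of over-active neurons, grow under-active ones,
        # with the per-step change clipped for stability
        factor = np.clip(1.0 + lr * (target_rate - rates), 0.5, 1.5)
        weights = weights * factor
    return weights

# Start from deliberately inflated (unstable) weights
w = rng.normal(0.0, 5.0, size=(20, 10))
norm_before = np.linalg.norm(w)
w = sleep_phase(w)
norm_after = np.linalg.norm(w)
```

After the noise-driven phase, the overall weight magnitude has fallen back to a regime where activity sits near the target rate, which is the qualitative effect the researchers describe: the noise does not teach the network anything new, it simply reins in runaway dynamics.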

The results suggest that slow-wave sleep may act, in part, to ensure that cortical neurons maintain their stability and do not hallucinate. The team is now looking to implement its algorithm into Intel’s Loihi neuromorphic chip.

Letting Loihi sleep every so often should enable it to stably process data from a silicon retina camera in real time. If the findings are confirmed, the team said, the same may be true for androids and other intelligent machines in the future.

Watkins will present the team’s findings at an upcoming Women in Computer Vision virtual conference.

Colm Gorey was a senior journalist with Silicon Republic

editorial@siliconrepublic.com