Even computers need a good night’s sleep: AI designed to mimic the neural networks in the brain works better with regular rest periods just like humans
- Researchers at Los Alamos National Laboratory studied an AI modeled on biological neural networks
- The system grew increasingly unstable the longer it ran
- They found it could be restabilized by exposing it to Gaussian noise that simulated the low-frequency signals a brain experiences while sleeping
Artificial intelligence systems modeled after human neural networks may operate better when given regular rest periods that mimic the effects of sleep on humans.
That was the conclusion researchers at Los Alamos National Laboratory in New Mexico reached after studying the performance of an AI designed to replicate neural networks in the human brain.
A team of computer scientists led by Yijing Watkins designed an AI they hoped would be capable of teaching itself how to accurately classify different types of objects without access to any pre-existing database or classification system.
Computer scientists at Los Alamos National Laboratory in New Mexico designed a machine-learning AI modeled after human neural networks and found that it became unstable over time, needing regular rest periods to function properly.
They used what’s called a spiking neural network, a system that mimics the way neurons in the brain fire at different times and in different intensities based on the kinds of stimulation they’re receiving.
The system was ideal for the test because, according to Watkins, it’s ‘analogous to how humans and other biological systems learn from their environment during childhood development.’
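The firing behavior described above can be pictured with a toy 'leaky integrate-and-fire' neuron, a standard simplified model of a spiking neuron. This is an illustrative sketch, not the Los Alamos team's code; the function name and parameter values are invented for the example.

```python
def simulate_lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Toy leaky integrate-and-fire neuron.

    The membrane potential accumulates incoming current, slowly leaks
    away, and emits a spike (1) whenever it crosses the threshold,
    then resets to zero -- mimicking how biological neurons fire.
    """
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = leak * potential + current  # integrate with leak
        if potential >= threshold:
            spikes.append(1)   # neuron fires
            potential = 0.0    # reset after spiking
        else:
            spikes.append(0)
    return spikes

# Strong, steady input makes the neuron fire often; weak input never
# pushes the potential over the threshold.
strong = simulate_lif_neuron([0.6] * 10)
weak = simulate_lif_neuron([0.1] * 10)
print(sum(strong), sum(weak))  # → 5 0
```

Real spiking networks connect many such units, so that spikes from one neuron become the input current of the next.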
In preliminary testing, however, the team found their AI wasn't actually teaching itself but instead seemed to grow less and less stable the longer it ran.
After experimenting with a number of other fixes, the team settled on the concept of sleep as a ‘last-ditch effort.’
Rather than just unplugging the machine, they devised a way to induce the AI to enter a kind of low energy state similar to the one the human brain enters when asleep.
‘It was as though we were giving the neural networks the equivalent of a good night’s rest,’ Watkins said in a statement to the Los Alamos Laboratory’s news blog.
To simulate sleep, the team exposed the AI to Gaussian noise, a random signal spanning a wide range of frequencies, to mimic the neural signals a brain receives when it enters the ‘slow-wave’ stage of deep sleep that precedes REM sleep.
Rather than simply unplug the computer, the team exposed the system to Gaussian noise, a random signal similar to radio static that replicated the neural signals the human brain experiences while asleep.
Surprisingly, after being exposed to several hours of Gaussian noise, the system restabilized and started functioning normally again.
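The 'sleep' procedure can be sketched in miniature: instead of training data, the network is fed random values drawn from a Gaussian (bell-curve) distribution, the same kind of signal as radio static. This is a simplified illustration of the idea, not the researchers' actual implementation; the neuron model and all parameter values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def drive_neuron(inputs, threshold=1.0, leak=0.9):
    """Toy leaky spiking neuron: accumulates input, leaks, counts spikes."""
    potential, spikes = 0.0, 0
    for x in inputs:
        potential = leak * potential + x   # integrate with leak
        if potential >= threshold:
            spikes += 1                    # fire
            potential = 0.0                # reset
    return spikes

# During the simulated 'sleep' phase, the network receives Gaussian
# noise -- random values clustered around zero, like radio static --
# rather than structured training data.
sleep_input = rng.normal(loc=0.0, scale=0.5, size=1000)
print(drive_neuron(sleep_input))
```

The noise occasionally pushes neurons over their firing threshold, keeping the network gently active without teaching it anything new.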
Los Alamos Lab’s Garrett Kenyon believes the issue arose because they had focused on a design that specifically replicated real biological systems.
‘The issue of how to keep learning systems from becoming unstable really only arises when attempting to utilize biologically realistic, spiking neuromorphic processors or when trying to understand biology itself,’ Kenyon said.
‘The vast majority of machine learning, deep learning, and AI researchers never encounter this issue because in the very artificial systems they study they have the luxury of performing global mathematical operations that have the effect of regulating the overall dynamical gain of the system.’
The next step for the team will be working on a way to integrate a ‘sleep’ system directly into neuromorphic computer chips, starting with Intel’s experimental Loihi chipset.
WHAT IS DEEP LEARNING?
Deep learning is a form of machine learning whose algorithms have a wide range of applications.
It is a field which was inspired by the human brain and focuses on building artificial neural networks.
It originally grew out of brain simulations and out of efforts to make learning algorithms more powerful and easier to use.
Processing vast amounts of complex data then becomes much easier and allows researchers to trust algorithms to draw accurate conclusions based on the parameters the researchers have set.
Existing task-specific algorithms perform better on narrow tasks and goals, but deep learning allows for a far wider scope of data collection.
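The artificial neural networks described above are built from simple units like the one sketched below: a weighted sum of inputs plus a bias, passed through a nonlinear 'activation' function. This is a minimal illustration; the numbers are arbitrary.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus a bias,
    passed through a nonlinear activation function (here, ReLU)."""
    return max(0.0, float(np.dot(inputs, weights) + bias))

# A deep network stacks layers of such neurons; 'learning' means
# adjusting the weights so the network's outputs match the data.
output = neuron(np.array([0.5, -1.0, 2.0]), np.array([0.4, 0.1, 0.3]), 0.1)
print(output)  # 0.5*0.4 - 1.0*0.1 + 2.0*0.3 + 0.1 = roughly 0.8
```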