Artificial Neural Networks Learn More Effectively After a Period of Complete Inactivity

Humans need 7 to 13 hours of sleep each night, depending on age. Much happens during that time: hormone levels shift; heart rate, breathing, and metabolism ebb and flow; and the body relaxes. The brain, however, does not wind down.

“The brain is very busy when we sleep, repeating what we have learned during the day,” said Maxim Bazhenov, PhD, professor of medicine and a sleep researcher at University of California San Diego School of Medicine. “Sleep helps reorganize memories and presents them in the most efficient way.”

In earlier published work, Bazhenov and colleagues described how sleep builds rational memory, the ability to retain arbitrary or indirect associations between objects, people, or events, and how it protects against forgetting old memories.

Artificial neural networks, whose architecture is modeled on the human brain, power a wide range of technologies and systems, from basic science and medicine to finance and social media. They surpass humans in some respects, such as computational speed, but they fall short in one crucial way: when artificial neural networks learn sequentially, new information overwrites what was learned before, a failure known as catastrophic forgetting.
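
To make the failure mode concrete, here is a minimal, hypothetical sketch (not from the paper) of catastrophic forgetting: a tiny logistic-regression model is trained on one task and then on a second, conflicting task, and the gradients from the second task overwrite the weights that encoded the first.

```python
# Hypothetical demonstration of catastrophic forgetting: sequential training
# on two conflicting tasks, with no replay or interleaving.
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, w_true):
    """Synthetic binary task: label = 1 if x @ w_true > 0."""
    X = rng.normal(size=(n, 2))
    y = (X @ w_true > 0).astype(float)
    return X, y

def train(w, X, y, lr=0.1, epochs=200):
    """Plain full-batch logistic-regression gradient descent."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return np.mean(((X @ w) > 0) == (y > 0.5))

# Task A and Task B demand conflicting weight patterns.
Xa, ya = make_task(1000, np.array([1.0, 1.0]))    # Task A: x0 + x1 > 0
Xb, yb = make_task(1000, np.array([1.0, -1.0]))   # Task B: x0 - x1 > 0

w = np.zeros(2)
w = train(w, Xa, ya)
print("Task A accuracy after learning A:", accuracy(w, Xa, ya))  # close to 1.0

w = train(w, Xb, yb)   # new task, old data never revisited
print("Task A accuracy after learning B:", accuracy(w, Xa, ya))  # drops sharply
print("Task B accuracy after learning B:", accuracy(w, Xb, yb))  # close to 1.0
```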

“In contrast, the human brain learns continuously and incorporates new data into existing knowledge,” said Bazhenov, “and it typically learns best when new training is interleaved with periods of sleep for memory consolidation.”

In the new study, senior author Bazhenov and colleagues describe how biological models can help reduce the risk of catastrophic forgetting in artificial neural networks, increasing their usefulness across a spectrum of research interests. The work was published November 18, 2022 in PLOS Computational Biology.

The researchers employed spiking neural networks, which artificially imitate natural neural systems by transmitting information as discrete events (spikes) at specific times rather than continuously.
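
The article contains no code, but a leaky integrate-and-fire neuron is a common minimal building block of spiking networks. The sketch below, with illustrative parameters that are not taken from the study, shows what transmitting information as discrete events at specific times means in practice: a continuous input current is turned into a list of spike times rather than a continuous output value.

```python
# Hypothetical leaky integrate-and-fire (LIF) neuron; parameters are illustrative.
import numpy as np

def lif_spike_times(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Integrate an input current over time and return discrete spike times."""
    v = 0.0
    spike_times = []
    for t, i_t in enumerate(input_current):
        # Leaky integration: the potential decays toward rest and is driven by input.
        v += dt * (-v / tau + i_t)
        if v >= v_thresh:            # threshold crossing -> discrete spike event
            spike_times.append(t)
            v = v_reset              # reset after the spike
    return spike_times

# A constant drive yields a regular spike train; downstream neurons see only
# the timing and count of these events, not a continuous activation.
current = np.full(200, 0.08)
print(lif_spike_times(current))
```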

They found that catastrophic forgetting was reduced when the spiking networks were trained on a new task with occasional offline periods that mimicked sleep. According to the study's authors, during these periods the networks could replay old memories, much as the sleeping human brain does, without needing explicit access to the earlier training data.
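
As a rough illustration of the idea, the sketch below interleaves ordinary task training with an offline "sleep" phase in which the network receives no training data, is driven by random activity, and is updated with a local Hebbian rule, so connections strengthened by earlier tasks tend to be reactivated and reinforced. This is a deliberately simplified sketch, not the authors' spiking-network implementation; the linear network, the delta-rule training step, and the specific sleep update are all illustrative assumptions.

```python
# Schematic sketch (hypothetical, not the paper's algorithm) of interleaving
# task training with an offline, data-free "sleep" phase.
import numpy as np

rng = np.random.default_rng(0)

def train_step(W, x, y_target, lr=0.05):
    """Supervised update on one example (simple delta rule on a linear layer)."""
    y = W @ x
    W += lr * np.outer(y_target - y, x)
    return W

def sleep_phase(W, steps=200, lr=0.001):
    """Offline phase: drive the network with noise and apply Hebbian updates.

    No stored training data is used; activity generated through the existing
    weights determines which connections get reinforced.
    """
    for _ in range(steps):
        x = rng.normal(size=W.shape[1])   # spontaneous, input-free drive
        y = np.tanh(W @ x)                # activity shaped by the current weights
        W += lr * np.outer(y, x)          # Hebbian: co-active pre/post strengthen
        W *= 0.999                        # mild decay keeps weights bounded
    return W

# Interleave: train on each task, then "sleep" before moving to the next task.
W = rng.normal(scale=0.1, size=(3, 5))
for task in range(2):
    data = [(rng.normal(size=5), rng.normal(size=3)) for _ in range(50)]  # toy task
    for x, y in data:
        W = train_step(W, x, y)
    W = sleep_phase(W)   # consolidation step with no access to earlier task data
```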

In the human brain, memories are represented by patterns of synaptic weight, that is, the strength or amplitude of the connection between two neurons.

“When we learn new information,” said Bazhenov, “neurons fire in specific order and this increases synapses between them. During sleep, the spiking patterns learned during our awake state are repeated spontaneously. It’s called reactivation or replay.”
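
In computational models, this order-dependent strengthening is often captured by spike-timing-dependent plasticity (STDP): a synapse grows when the presynaptic neuron fires just before the postsynaptic neuron and shrinks when the order is reversed. A minimal pairwise version, with illustrative parameters rather than values from the study, looks like this:

```python
# Minimal pairwise STDP sketch (illustrative parameters, not from the study).
import numpy as np

def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair, based on their timing (ms)."""
    dt = t_post - t_pre
    if dt > 0:                        # pre fires before post: potentiation
        return a_plus * np.exp(-dt / tau)
    else:                             # post fires before (or with) pre: depression
        return -a_minus * np.exp(dt / tau)

# Firing in the learned order strengthens the connection...
print(stdp_delta_w(t_pre=10.0, t_post=15.0))   # positive change
# ...while the reversed order weakens it.
print(stdp_delta_w(t_pre=15.0, t_post=10.0))   # negative change
```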

Synaptic plasticity, the capacity of connections to change or be remolded, remains active while we sleep. It can refine the patterns of synaptic weight that represent memories, which helps prevent forgetting and supports the transfer of knowledge from one task to another.

Bazhenov and colleagues found that the same mechanism could be used to prevent catastrophic forgetting in artificial neural networks.

“It meant that these networks could learn continuously, like humans or animals,” said Bazhenov. “Understanding how the human brain processes information during sleep can help to augment memory in human subjects. Augmenting sleep rhythms can lead to better memory.”

“In other projects, we use computer models to develop optimal strategies to apply stimulation during sleep, such as auditory tones, that enhance sleep rhythms and improve learning. This may be particularly important when memory is non-optimal, such as when memory declines in aging or in some conditions like Alzheimer’s disease.”

Co-authors include: Ryan Golden and Jean Erik Delanois, both at UC San Diego; and Pavel Sanda, Institute of Computer Science of the Czech Academy of Sciences.
