
“Raw” Data Shows AI Signals Mirror What the Brain Does to Hear and Learn

According to recent research from the University of California, Berkeley, artificial intelligence (AI) systems can analyze data in a manner strikingly similar to how the brain processes speech, a result that scientists say may help explain the “black box” of how AI systems function.

Scientists from the Berkeley Speech and Computation Lab used a system of electrodes attached to volunteers’ heads to monitor their brain waves as they listened to the single syllable “bah.” They then compared that brain activity with the signals produced by an AI system that had been taught to understand English.

“The shapes are remarkably similar,” said Gasper Begus, assistant professor of linguistics at UC Berkeley and lead author on the study published recently in the journal Scientific Reports. “That tells you similar things get encoded, that processing is similar.”

A graph comparing the two signals side by side shows that similarity strikingly.

“There are no tweaks to the data,” Begus added. “This is raw.”

AI systems have recently advanced by leaps and bounds. Ever since ChatGPT spread globally last year, these tools have been predicted to upend entire sectors of society and transform how millions of people work. Yet despite these remarkable advances, scientists still have only a limited understanding of exactly what happens between the input and output of the tools they have built.

A question posed to ChatGPT and the answer it returns have become the benchmark for assessing an AI system’s intelligence and biases. But what happens in between has largely remained a mystery. As these systems become more integrated into daily life, in industries ranging from health care to education, understanding how and why they deliver the information they do, and how they learn, becomes crucial.

Begus and his co-authors, Alan Zhou of Johns Hopkins University and T. Christina Zhao of the University of Washington, are among a cadre of scientists working to crack open that box.

To do so, Begus turned to his training in linguistics.

“When we listen to spoken words,” Begus said, “the sound enters our ears and is converted into electrical signals. Those signals then travel through the brainstem and to the outer parts of our brain.” With the electrode experiment, the researchers traced that path in response to 3,000 repetitions of a single sound and found that the brain waves for speech closely followed the actual sounds of language.
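
Tracing that path in practice means recording thousands of stimulus-locked epochs and averaging them so the response to the sound stands out from background noise. The short Python sketch below illustrates only that averaging-and-comparison idea; the sampling rate, epoch length, and synthetic signals are hypothetical stand-ins, not the study’s data or analysis code.

    # Minimal sketch (not the study's actual pipeline): average many repetitions
    # of an electrode recording so the stimulus-locked wave emerges from noise,
    # then compare that averaged brain wave with the sound itself.
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 16_000                              # assumed sampling rate in Hz
    n_trials, n_samples = 3_000, fs // 10    # 3,000 repetitions, 100 ms epochs

    # Hypothetical stimulus: a stand-in for the "bah" waveform.
    t = np.arange(n_samples) / fs
    stimulus = np.sin(2 * np.pi * 220 * t)

    # Hypothetical single-trial recordings: the stimulus-locked signal buried in noise.
    epochs = stimulus + rng.normal(scale=5.0, size=(n_trials, n_samples))

    # Averaging across trials suppresses the noise and leaves the evoked wave.
    evoked = epochs.mean(axis=0)

    # Compare the averaged brain wave with the raw sound via Pearson correlation.
    r = np.corrcoef(evoked, stimulus)[0, 1]
    print(f"correlation between averaged response and stimulus: {r:.3f}")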

The researchers then relayed the same recording of the “bah” sound through an unsupervised neural network, an AI system that could interpret sound. Using a method they devised in the Berkeley Speech and Computation Lab, they measured the coinciding waves and recorded them as they occurred.
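
One common way to record a network’s internal “waves” is to register hooks that capture each layer’s activations as the audio passes through. The sketch below uses a small, untrained 1-D convolutional encoder purely as a stand-in for the unsupervised network; the architecture, layer sizes, and input are assumptions for illustration, not the authors’ model or method.

    # Minimal sketch, not the authors' model: a small, untrained 1-D convolutional
    # encoder stands in for the unsupervised network, and forward hooks record the
    # intermediate "waves" (layer activations) as the sound passes through it.
    import torch
    import torch.nn as nn

    encoder = nn.Sequential(
        nn.Conv1d(1, 16, kernel_size=25, stride=4, padding=12), nn.ReLU(),
        nn.Conv1d(16, 32, kernel_size=25, stride=4, padding=12), nn.ReLU(),
    )

    recorded = {}

    def save_activation(name):
        def hook(module, inputs, output):
            recorded[name] = output.detach()
        return hook

    # Attach a hook to each convolutional layer to capture its output.
    for idx, layer in enumerate(encoder):
        if isinstance(layer, nn.Conv1d):
            layer.register_forward_hook(save_activation(f"conv{idx}"))

    # Hypothetical "bah" recording: 100 ms of audio at 16 kHz, batch of one.
    waveform = torch.randn(1, 1, 1600)
    _ = encoder(waveform)

    for name, activation in recorded.items():
        print(name, tuple(activation.shape))   # conv0 -> (1, 16, 400), conv2 -> (1, 32, 100)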

“Previous research required extra steps to compare waves from the brain and machines. Studying the waves in their raw form will help researchers understand and improve how these systems learn and increasingly come to mirror human cognition,” Begus said.
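
A “raw” comparison of that kind can be as simple as putting the two waveforms on a common length and correlating them directly, with no spectral transforms or other preprocessing in between. The sketch below shows one such comparison; the resampling step and the random example arrays are assumptions for illustration, not the lab’s method.

    # Minimal sketch of a raw-wave comparison, assuming a brain wave and a
    # model-layer wave are available as plain 1-D arrays: resample one to the
    # other's length and correlate directly, with no further preprocessing.
    import numpy as np
    from scipy.signal import resample

    def raw_similarity(brain_wave, model_wave):
        """Pearson correlation between two waveforms after length matching."""
        model_matched = resample(model_wave, len(brain_wave))
        return float(np.corrcoef(brain_wave, model_matched)[0, 1])

    # Illustrative inputs; in practice these would be the recorded signals.
    rng = np.random.default_rng(1)
    brain_wave = rng.normal(size=1600)
    model_wave = rng.normal(size=400)
    print(f"raw-wave similarity: {raw_similarity(brain_wave, model_wave):.3f}")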

“I’m really interested as a scientist in the interpretability of these models,” Begus said. “They are so powerful. Everyone is talking about them. And everyone is using them. But much less is being done to try to understand them.”

Begus thinks that what occurs between input and output need not be a mystery. Understanding what is happening inside these systems, and how their signals compare to human brain activity, is an essential yardstick in the quest to build ever-more powerful ones.

Knowing that, for instance, could help put guardrails on ever-more powerful AI models. It could also clarify how errors and bias become ingrained in the learning process.

Begus said he and his colleagues are collaborating with other researchers who use brain imaging techniques to measure how these signals might compare. They are also studying how other languages, such as Mandarin, are decoded differently in the brain and what that might reveal about knowledge.

“Many models are trained on visual cues, like colors or written text, both of which have thousands of variations at the granular level. Language, however, opens the door for a more solid understanding,” Begus said.

The English language, for example, has just a few dozen sounds.

“If you want to understand these models, you have to start with simple things. And speech is way easier to understand,” Begus said. “I am very hopeful that speech is the thing that will help us understand how these models are learning.”

Building mathematical models that closely mimic humans is one of the main objectives of cognitive research. The newly documented parallels between brain waves and AI signals offer a gauge of how near scientists are to that goal.

“I’m not saying that we need to build things like humans,” Begus said. “I’m not saying that we don’t. But understanding how different architectures are similar or different from humans is important.”
