
A New Way of Comparing Neural Networks Reveals the Workings of AI

There are many ways to compare and evaluate neural networks, and it is an active area of research in the field of artificial intelligence. Some common methods include comparing the performance of the networks on a particular task or dataset, examining the structure of the networks and how they are trained, and analyzing the intermediate representations that the networks learn while processing data.
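
The article does not spell out which similarity measure the Los Alamos team developed, but a common way to carry out the third approach, comparing intermediate representations, is linear centered kernel alignment (CKA). The sketch below illustrates that general idea only; it is not the team's method, and the activation arrays are random placeholders.

```python
import numpy as np

def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    """Linear Centered Kernel Alignment between two activation matrices.

    x and y have shape (n_samples, n_features) and hold the intermediate
    activations of two networks evaluated on the same inputs.  The score
    lies in [0, 1]; higher means more similar representations.
    """
    # Center each feature so the comparison ignores mean offsets.
    x = x - x.mean(axis=0, keepdims=True)
    y = y - y.mean(axis=0, keepdims=True)

    # Squared Frobenius norm of the cross-covariance, normalized by the
    # self-covariances of each representation.
    cross = np.linalg.norm(y.T @ x, ord="fro") ** 2
    return float(cross / (np.linalg.norm(x.T @ x, ord="fro")
                          * np.linalg.norm(y.T @ y, ord="fro")))

# Placeholder activations for two hypothetical models on the same 512 inputs;
# the feature dimensions of the two networks do not need to match.
acts_a = np.random.randn(512, 256)
acts_b = np.random.randn(512, 128)
print(f"CKA similarity: {linear_cka(acts_a, acts_b):.3f}")
```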

By understanding how different neural network architectures and training methods impact their performance, researchers can gain insights into the inner workings of artificial intelligence systems and improve their design and use.

A team at Los Alamos National Laboratory has developed a novel approach for comparing neural networks that looks inside the “black box” of artificial intelligence to help researchers understand neural network behavior. Neural networks recognize patterns in datasets and are used in applications such as virtual assistants, facial recognition systems, and self-driving cars.

“The artificial intelligence research community does not necessarily have a complete understanding of what neural networks are doing; they give us good results, but we don’t know how or why,” said Haydn Jones, a researcher in Los Alamos’ Advanced Research in Cyber Systems group. “Our new method compares neural networks more effectively, which is an important step toward better understanding the mathematics behind AI.”

Jones is the lead author of the paper “If You’ve Trained One, You’ve Trained Them All: Inter-Architecture Similarity Increases with Robustness,” which was presented at the Conference on Uncertainty in Artificial Intelligence last month. In addition to studying network similarity, the paper is an important step toward characterizing the behavior of robust neural networks.

High-performance neural networks are fragile. Self-driving cars, for example, use neural networks to detect signs. When conditions are ideal, they perform admirably. However, even minor imperfections, such as a sticker on a stop sign, can cause the network to misidentify the sign, so the car never stops.

To improve neural networks, researchers are looking at ways to improve network robustness. One state-of-the-art approach involves “attacking” networks during their training process. Researchers intentionally introduce aberrations and train the AI to ignore them. This process is called adversarial training and essentially makes it harder to fool the networks.
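
The article does not say which attack the Los Alamos team trains against; the fast gradient sign method (FGSM) and projected gradient descent are common choices in the adversarial training literature. The PyTorch sketch below shows an FGSM-style training step under that assumption; `model`, `optimizer`, the data batch, and the epsilon value are supplied by the caller and are illustrative.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon):
    """Craft adversarial examples with one signed-gradient step of size epsilon."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Nudge each pixel in the direction that most increases the loss.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, images, labels, epsilon=8 / 255):
    """One optimization step on adversarially perturbed inputs."""
    model.train()
    adv_images = fgsm_perturb(model, images, labels, epsilon)
    optimizer.zero_grad()  # discard gradients accumulated while crafting the attack
    loss = F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training on the perturbed batch rather than the clean one is what makes the network harder to fool: it must classify correctly even at the worst nearby input the attack can find.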

Jones, Los Alamos collaborators Jacob Springer and Garrett Kenyon, and Jones’ mentor Juston Moore applied their new network similarity metric to adversarially trained neural networks. They discovered, surprisingly, that as the magnitude of the attack increases, adversarial training causes neural networks in the computer vision domain to converge to very similar data representations, regardless of network architecture.
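
Neither the article nor the paper excerpt here gives the team's experimental code, so the block below only sketches the shape of such a study under stated assumptions: several architectures are adversarially trained at increasing attack strengths, and a pairwise representation-similarity score is averaged across them. The activation generator is a random stand-in (the numbers it prints are meaningless), and the architecture names and epsilon values are purely illustrative.

```python
import itertools
import numpy as np

def linear_cka(x, y):
    """Linear CKA, repeated from the earlier sketch so this block runs on its own."""
    x = x - x.mean(axis=0, keepdims=True)
    y = y - y.mean(axis=0, keepdims=True)
    return float(np.linalg.norm(y.T @ x, "fro") ** 2
                 / (np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro")))

def activations_after_robust_training(arch: str, epsilon: float) -> np.ndarray:
    """Stand-in: would return hidden activations of `arch` adversarially trained at strength epsilon."""
    rng = np.random.default_rng(abs(hash((arch, epsilon))) % 2**32)
    return rng.standard_normal((512, 256))

# The reported finding is that mean pairwise similarity rises with the attack
# strength, regardless of which architectures are compared.
for epsilon in (0.0, 2 / 255, 4 / 255, 8 / 255):
    acts = [activations_after_robust_training(a, epsilon)
            for a in ("resnet18", "vgg16", "vit_small")]
    score = np.mean([linear_cka(a, b) for a, b in itertools.combinations(acts, 2)])
    print(f"eps={epsilon:.4f}  mean pairwise similarity={score:.3f}")
```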

“We discovered that when we train neural networks to be robust against adversarial attacks, they start to do the same things,” Jones explained.

There has been considerable effort in industry and academia to find the “right architecture” for neural networks, but the Los Alamos team’s findings show that adding adversarial training significantly narrows this search space. Because adversarial training causes diverse architectures to converge to similar solutions, the AI research community may not need to spend as much time exploring new architectures.

“We’re making it easier to understand how robust AI might work by discovering that robust neural networks are similar to one another. We may even be uncovering clues about how humans and other animals perceive,” Jones stated.
