Brain-mimicking neural networks promise to perform more quickly, accurately, and impartially than humans on a wide range of problems, from analyzing mutations associated with cancer to deciding who receives a loan. However, these AI systems work in notoriously mysterious ways, raising concerns over their trustworthiness. Now a new study has found a way to reveal when neural networks might get confused, potentially shedding light on what they may be doing when they make mistakes.
As neural networks run computations on sets of data, such as collections of images, they focus on details within each sample in the set, such as potential facial features. The strings of numbers encoding these details are used to calculate the probability that a sample belongs to a specific category: in this case, whether the image is of a person's face.
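As a concrete illustration, here is a minimal sketch, assuming a standard softmax classifier rather than the study's actual network, of how per-category scores become probabilities. The category names and score values are invented for the example.

```python
import numpy as np

def softmax(logits):
    """Convert raw per-category scores into probabilities that sum to 1."""
    exp = np.exp(logits - logits.max())  # subtract max for numerical stability
    return exp / exp.sum()

# Hypothetical scores for three categories; higher means "more likely".
logits = np.array([2.1, 0.3, -1.2])
probs = softmax(logits)
print(dict(zip(["face", "cat", "car"], probs.round(3))))
# The category with the largest probability is the network's prediction.
```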
However, the way in which neural networks learn what details help them find solutions is often a mystery. This “black box” nature of neural networks makes it difficult to know if a neural network’s answers are right or wrong.
“When a person solves a problem, you can ask them how they solved it and hopefully get an answer you can understand,” says study senior author David Gleich, a professor of computer science at Purdue University in West Lafayette, Ind. “Neural networks don’t work like that.”
In the new study, instead of attempting to follow the decision-making process for any one sample on which neural networks are tested, Gleich and his colleagues sought to visualize the relationships these AI systems detect in all samples in an entire database.
“I’m still amazed at how helpful this technique is to help us understand what a neural network might be doing to make a prediction,” Gleich says.
The scientists experimented with a neural network trained to analyze roughly 1.3 million images in the ImageNet database. They developed a method of splitting and overlapping classifications to identify images that had a high probability of belonging to more than one classification.
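The paper's exact procedure isn't reproduced here, but the general idea of flagging samples that score highly in more than one class can be sketched as follows. The probability matrix, class count, and threshold below are all hypothetical stand-ins.

```python
import numpy as np

# Illustrative sketch, not the authors' code: given each image's predicted
# probabilities over all classes, flag images that plausibly belong to more
# than one class. The synthetic data and 0.30 threshold are invented.
rng = np.random.default_rng(0)
probs = rng.dirichlet(alpha=np.ones(5) * 0.5, size=1000)  # 1,000 fake images, 5 classes

threshold = 0.30
top2 = np.sort(probs, axis=1)[:, -2:]                    # two highest probabilities per image
ambiguous = np.where(top2.min(axis=1) >= threshold)[0]   # both top classes exceed the threshold
print(f"{len(ambiguous)} of {len(probs)} images sit near a class boundary")
```

Images flagged this way are exactly the ones a map of overlapping classifications would place between categories.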
The researchers then drew from the mathematical field of topology—which studies properties of geometric objects—to map the relationships the neural network inferred between each image and each classification. Analytical techniques from topology can help scientists identify similarities between sets of data despite any seeming differences. “Tools based on topological data analysis have in the past been used to analyze gene expression levels and identify specific subpopulations in breast cancer, among other really interesting insights,” Gleich says.
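One standard construction from topological data analysis is the Mapper algorithm, which builds this kind of map by covering the data with overlapping slices, clustering within each slice, and linking clusters that share points. Below is a toy, self-contained Mapper-style sketch over synthetic feature vectors; the study's actual pipeline, lens function, and clustering choices may differ.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Toy Mapper-style construction on synthetic data (not the study's pipeline).
rng = np.random.default_rng(1)
features = rng.normal(size=(300, 8))  # stand-in for the network's image embeddings
lens = features[:, 0]                 # "lens": project each point to one value

nodes, edges = [], set()
lo, hi = lens.min(), lens.max()
n_intervals, overlap = 6, 0.3
width = (hi - lo) / n_intervals

# Cover the lens range with overlapping intervals and cluster each slice.
for i in range(n_intervals):
    start = lo + i * width - overlap * width
    end = lo + (i + 1) * width + overlap * width
    idx = np.where((lens >= start) & (lens <= end))[0]
    if len(idx) < 2:
        continue
    labels = fcluster(linkage(features[idx], method="single"),
                      t=2.5, criterion="distance")
    for lab in np.unique(labels):
        nodes.append(set(idx[labels == lab]))

# Connect clusters that share points; the overlap glues the map into a graph.
for a in range(len(nodes)):
    for b in range(a + 1, len(nodes)):
        if nodes[a] & nodes[b]:
            edges.add((a, b))

print(f"map with {len(nodes)} nodes and {len(edges)} edges")
```

Each node in such a graph stands for a cluster of related samples, which matches the "dots" described in the maps below.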
“Our tool allows us to build something like a map that makes it possible to zoom in on regions of data.”
—David Gleich, Purdue University
In the maps this new study generated, each group of images a network thinks are related is represented by a single dot. Each dot is…
The post “Error-Detection Tool Makes AI Mistakes Easy to Spot” by Charles Q. Choi was published on 12/03/2023 by spectrum.ieee.org