AI vision models have improved dramatically over the past decade. Yet those gains have produced neural networks that, though effective, share few characteristics with human vision. For example, convolutional neural networks (CNNs) are often better at noticing texture, while humans respond more strongly to shapes.
A paper recently published in Nature Human Behaviour has partially addressed that gap. It describes a novel All-Topographic Neural Network (All-TNN) that, when trained on natural images, developed an organized, specialized structure more like human vision. The All-TNN better mimicked human spatial biases, like expecting to see an airplane closer to the top of an image than the bottom, and operated on a significantly lower energy budget than other neural networks used for machine vision.
“One of the things you notice when you look at the way knowledge is ordered in the brain, is that it’s fundamentally different to how it is ordered in deep neural networks, such as convolutional neural nets,” said Tim C. Kietzmann, full professor at the Institute of Cognitive Science in Osnabrück, Germany, and co-supervisor of the paper.
All-TNN networks learn human-like spatial biases
Most machine vision systems in use today, including those found in apps like Google Photos and Snapchat, use some form of CNN. CNNs replicate identical feature detectors across many spatial locations, a practice known as “weight sharing.” The result is a network that, when mapped, looks like a tightly repeating fractal pattern.
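To make the contrast concrete, here is a minimal sketch in PyTorch (illustrative only; it is not code from the paper). Because of weight sharing, a convolutional layer stores a single small set of kernels and reuses them at every position, so its parameter count is independent of the image size:

```python
import torch
import torch.nn as nn

# One convolutional layer: 16 feature detectors, each a 3x3x3 kernel.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)

# The layer stores just one weight tensor (16x3x3x3) plus 16 biases,
# no matter how large the input image is.
print(sum(p.numel() for p in conv.parameters()))  # 448

x = torch.randn(1, 3, 224, 224)  # a 224x224 RGB image
y = conv(x)                      # the same 448 parameters are applied
print(y.shape)                   # at every one of the 224x224 locations
```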
The structure of an All-TNN is quite different. It appears smooth, with related neurons organized into clusters but never replicated. Images that map the spatial relationships in an All-TNN look like the topographic view of a hilly region, or a group of microorganisms viewed under a microscope.
This visual difference is more than a matter of pretty pictures. Kietzmann said the weight sharing used by CNNs is a fundamental deviation from biological brains. “The brain can’t, when it learns something in one location, copy that knowledge over to other locations,” he said. “While in a CNN, you can. It’s an engineering hack to be a bit more efficient at learning.”
The All-TNN avoids that shortcut through a fundamentally different architecture and training approach.
Instead of weight sharing, the researchers gave each spatial location in the network its own set of learnable parameters. Then, to prevent this from producing chaotic, disorganized features, they added a “smoothness constraint” during training that encouraged neighboring neurons to learn similar (yet never identical) features.
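A minimal sketch of that idea in PyTorch follows. It is not the authors’ implementation: the layer shape, the initialization, and the `SMOOTH_LAMBDA` weighting factor are illustrative assumptions. Only the core scheme (a private kernel per location, plus a penalty on differences between neighboring kernels) follows the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocallyConnected2d(nn.Module):
    """Sketch of a layer without weight sharing: every output location
    owns a private kernel (illustrative, not the authors' code)."""
    def __init__(self, in_ch, out_ch, in_size, kernel_size):
        super().__init__()
        self.k = kernel_size
        self.out_size = in_size - kernel_size + 1  # stride 1, no padding
        # One weight vector per spatial position:
        # shape (out_ch, positions, in_ch * k * k)
        self.weight = nn.Parameter(
            0.01 * torch.randn(out_ch, self.out_size**2,
                               in_ch * kernel_size**2)
        )

    def forward(self, x):
        # Unfold the input into per-position patches:
        # shape (B, in_ch * k * k, positions)
        patches = F.unfold(x, self.k)
        # Apply each position's own kernel to its own patch.
        out = torch.einsum("bpl,olp->bol", patches, self.weight)
        return out.view(x.shape[0], -1, self.out_size, self.out_size)

    def smoothness_loss(self):
        # Penalize differences between kernels at adjacent positions, so
        # neighbors learn similar -- but never identical -- features.
        w = self.weight.view(self.weight.shape[0],
                             self.out_size, self.out_size, -1)
        return ((w[:, 1:] - w[:, :-1]).pow(2).mean()
                + (w[:, :, 1:] - w[:, :, :-1]).pow(2).mean())

layer = LocallyConnected2d(in_ch=3, out_ch=8, in_size=32, kernel_size=5)
x = torch.randn(4, 3, 32, 32)
y = layer(x)                 # shape (4, 8, 28, 28)

# Hypothetical training step: add the penalty to the ordinary task loss.
SMOOTH_LAMBDA = 0.1          # illustrative value, not from the paper
loss = y.pow(2).mean() + SMOOTH_LAMBDA * layer.smoothness_loss()
```

Without the smoothness term, each location’s kernel would drift independently; with it, neighboring kernels are pulled toward one another, which is what produces the clustered, map-like organization described above.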
To test whether that translated to machine vision with more human-like behavior, the researchers asked 30 human participants to identify objects flashed briefly in different screen locations. While the All-TNN still wasn’t a perfect analog for human vision, it proved three times more strongly…
Read full article: Topographic Neural Networks Help AI See Like a Human

The post “Topographic Neural Networks Help AI See Like a Human” by Matthew S. Smith was published on 07/08/2025 by spectrum.ieee.org