Science

AI Learns Like the Human Brain

A new layer-by-layer training method comes close to the accuracy of standard backpropagation while using less memory and avoiding vanishing gradients—making it well suited to self-driving cars and mobile devices.

AI Research
November 05, 2025
3 min read

Artificial intelligence systems could soon become more efficient and biologically plausible thanks to a new training approach that mimics how the human brain learns. Researchers have developed a layer-by-layer training method that achieves comparable accuracy to traditional approaches while using significantly less memory and avoiding common training problems. This breakthrough could enable more powerful AI on resource-constrained devices like smartphones and autonomous vehicles.

The key finding is that even when AI networks are trained with conventional end-to-end methods, they naturally learn in a layer-by-layer progression, from shallow layers to deep ones. This observation led researchers to develop Greedy-DIB (Deterministic Information Bottleneck), a novel training approach that explicitly trains each layer separately while keeping each layer tied to the final output. The method achieves performance comparable to standard backpropagation while being more computationally efficient and more biologically plausible.
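To make the idea concrete, here is a minimal NumPy sketch of greedy layer-wise training in general—not the authors' implementation, and without the paper's entropy regularization term. Each hidden layer is trained with its own small auxiliary classifier while the previous layers stay frozen, so full-depth backpropagation is never needed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs in 4-D.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 4)), rng.normal(1.0, 1.0, (100, 4))])
y = np.repeat([0, 1], 100)
Y = np.eye(2)[y]  # one-hot labels

def relu(z):
    return np.maximum(0.0, z)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train_layer(H, Y, width, steps=300, lr=0.5):
    """Train one hidden layer plus its own auxiliary linear classifier.

    Gradients flow only through this layer and its auxiliary head, so the
    frozen earlier layers never need their activations stored during the
    backward pass -- the memory saving that layer-wise training buys.
    """
    n, d = H.shape
    W = rng.normal(0.0, 0.5, (d, width))          # this layer's weights
    V = rng.normal(0.0, 0.5, (width, Y.shape[1])) # auxiliary classifier
    for _ in range(steps):
        Z = relu(H @ W)                 # this layer's representation
        P = softmax(Z @ V)              # auxiliary classifier prediction
        dlogits = (P - Y) / n           # cross-entropy gradient
        dZ = (dlogits @ V.T) * (Z > 0)  # gradient stops at this layer's input
        V -= lr * (Z.T @ dlogits)
        W -= lr * (H.T @ dZ)
    return W, V

# Stage 1: train the first layer directly on the inputs.
W1, _ = train_layer(X, Y, width=8)
H1 = relu(X @ W1)

# Stage 2: freeze layer 1 and train layer 2 on its output.
W2, V2 = train_layer(H1, Y, width=8)
pred = softmax(relu(H1 @ W2) @ V2).argmax(axis=1)
print(f"greedy two-layer accuracy: {(pred == y).mean():.2f}")
```

Note that each stage only ever holds one layer's activations and gradients in memory, which is why this family of methods scales to settings where standard backpropagation does not.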

Researchers systematically analyzed popular convolutional neural networks (CNNs) like VGG and ResNet through an information-theoretic lens. They designed cross-mutual information matrices to visualize how representations evolve during training, revealing that networks converge progressively from shallower to deeper layers. This layer-by-layer progression reflects the Markov structure of deep networks: each layer's representation depends only on the output of the layer before it. Building on these observations, the team developed Greedy-DIB, which trains each layer with an auxiliary classifier while minimizing redundant information through entropy regularization.
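The paper's cross-mutual information matrices compare representations pairwise; the exact estimator used is not described here, but the general idea can be sketched with a common histogram-based estimate. The variables `shallow`, `deep`, and `unrelated` below are illustrative stand-ins for per-layer activation summaries, not the authors' data:

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Histogram estimate of I(A; B) in bits between two 1-D samples."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()
    pa = p.sum(axis=1, keepdims=True)   # marginal distribution of A
    pb = p.sum(axis=0, keepdims=True)   # marginal distribution of B
    mask = p > 0
    return float((p[mask] * np.log2(p[mask] / (pa @ pb)[mask])).sum())

rng = np.random.default_rng(0)
# Stand-ins for activation summaries from three "layers":
shallow = rng.normal(size=5000)
deep = np.tanh(shallow) + 0.1 * rng.normal(size=5000)  # depends on shallow
unrelated = rng.normal(size=5000)                      # independent noise

# A 3x3 cross-mutual-information matrix over the three summaries.
layers = [shallow, deep, unrelated]
M = np.array([[mutual_information(a, b) for b in layers] for a in layers])
print(np.round(M, 2))
```

High off-diagonal entries indicate layers whose representations carry shared information; tracking how such a matrix changes over training epochs is what reveals the shallow-to-deep convergence order the researchers observed.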

The results show Greedy-DIB outperforms existing layer-wise training methods and achieves comparable performance to standard stochastic gradient descent (SGD). On CIFAR-100 with ResNet-18, the method achieved 72.03% accuracy compared to SGD's 75.61%, while using less memory and avoiding vanishing gradient problems. More impressively, in traffic sign recognition tasks, Greedy-DIB actually surpassed traditional methods, achieving 98.73% classification accuracy and 89.23% mean intersection over union on the Chinese Traffic Sign Recognition Database, compared to SGD's 98.23% and 86.31% respectively.

This research matters because it addresses critical limitations of current AI training methods. Traditional backpropagation requires storing all intermediate outputs during training, leading to high memory usage that makes training impractical in resource-constrained settings. It also suffers from vanishing and exploding gradients that can stall learning. The new approach sidesteps these problems while maintaining performance, making it well suited to applications like autonomous vehicles where real-time processing and reliability are crucial. The method's biological plausibility also opens doors to better understanding how brains learn.

The study acknowledges limitations, including that the approach hasn't been tested on more challenging datasets like Tsinghua-Tencent 100K, where traffic signs occupy less than 1% of images. Adapting to such datasets would require more advanced architectures and strategies. Additionally, while the method shows promise, it hasn't yet been extended to state-of-the-art object detection frameworks like YOLO, though this is planned for future work.

Original Source

Read the complete research paper

View on arXiv

About the Author

Guilherme A.


Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.

Connect on LinkedIn