A new artificial intelligence system can learn and recall information in a way that closely resembles human memory, addressing a fundamental challenge in AI research. The system, developed by researchers at the University of Michigan, stores patterns and reconstructs them from corrupted versions, much like how the brain recognizes a familiar object in poor lighting. This online auto-associative memory system supports sequential learning and provides formal guarantees for robust retrieval, overcoming limitations of previous models like Hopfield networks that lack such assurances and cannot learn patterns one after another as humans do.
The key finding is that this memory system successfully reconstructs patterns from corrupted inputs through a novel architecture combining several components. It uses a threshold-linear network as its latent space dynamics, which supports multiple attractors—stable points that represent stored memories. During learning, when a new pattern is presented, a controller triggers a transition to a new attractor, and the encoder and decoder are updated to associate that pattern with the attractor. During inference, noisy patterns are mapped to target states near the correct attractor, and the controller drives the system to recover the uncorrupted pattern. The researchers demonstrated in simulations that the system accurately reconstructs patterns, such as digits from the MNIST dataset, from noisy inputs.
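The learn/recall cycle described above can be sketched in a toy form. This is a minimal illustration, not the paper's actual algorithm: here the encoder and decoder are plain linear maps, each attractor is stood in for by a one-hot latent slot, and all names (`AttractorMemory`, `learn`, `recall`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

class AttractorMemory:
    """Toy auto-associative memory: each stored pattern is bound to its own
    one-hot latent "attractor"; encoder/decoder are linear stand-ins for the
    paper's learned components."""
    def __init__(self, dim, capacity):
        self.E = np.zeros((capacity, dim))   # encoder: pattern -> latent scores
        self.D = np.zeros((dim, capacity))   # decoder: latent -> pattern
        self.k = 0                           # number of attractors in use

    def learn(self, pattern):
        # "Controller triggers a transition to a new attractor": allocate the
        # next latent slot and associate the pattern with it.
        self.E[self.k] = pattern / np.linalg.norm(pattern)
        self.D[:, self.k] = pattern
        self.k += 1

    def recall(self, noisy):
        # "Noisy patterns are mapped near the correct attractor": pick the
        # best-matching latent slot, then decode the stored pattern.
        scores = self.E @ (noisy / np.linalg.norm(noisy))
        z = np.zeros(self.E.shape[0])
        z[np.argmax(scores)] = 1.0
        return self.D @ z

mem = AttractorMemory(dim=16, capacity=4)
patterns = [rng.standard_normal(16) for _ in range(3)]
for p in patterns:
    mem.learn(p)

noisy = patterns[1] + 0.3 * rng.standard_normal(16)
rec = mem.recall(noisy)
print(np.allclose(rec, patterns[1]))  # exact recovery when the right slot wins
```

The sequential-learning property shows up even in this caricature: storing a new pattern touches only a fresh slot of `E` and `D`, so earlier memories are untouched, which is the role the minimum-norm updates play in the real system.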
The methodology involves a chain-structured threshold-linear network (TLN) with specific parameters that ensure competitive and nondegenerate dynamics, allowing each cell to contain at most one equilibrium. The system includes an encoder that maps input patterns to latent states, a decoder that maps latent states back to patterns, and a controller that modulates the dynamics for memory formation and retrieval. The controller uses a feedback mechanism based on linear quadratic regulator design during inference and injects local inhibition and correlated noise during learning to induce attractor switches. Learning rules update the encoder and decoder weights using minimum-norm solutions to associate new patterns with attractors without disrupting previously learned memories, as detailed in Algorithm 1 of the paper.
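To make threshold-linear dynamics concrete, here is a small simulation on a three-unit chain graph. The weight scheme used below (zero diagonal, -1+ε on graph edges, -1-δ elsewhere) follows the combinatorial-TLN convention and is an assumption for illustration; the paper's exact chain parameterization, including its ε ∈ [1/2, 1) constraint, differs.

```python
import numpy as np

def simulate_tln(W, theta, x0, dt=0.01, steps=5000):
    """Euler-integrate threshold-linear dynamics dx/dt = -x + max(0, W x + theta)."""
    x = x0.astype(float).copy()
    for _ in range(steps):
        x += dt * (-x + np.maximum(0.0, W @ x + theta))
    return x

# Hypothetical chain 1 -> 2 -> 3 in the combinatorial-TLN style (not the
# paper's parameters): edges get -1+eps, non-edges -1-delta, zero diagonal.
eps, delta, theta = 0.25, 0.5, 1.0
W = np.full((3, 3), -1.0 - delta)
np.fill_diagonal(W, 0.0)
W[1, 0] = W[2, 1] = -1.0 + eps   # edges 1->2 and 2->3

x_final = simulate_tln(W, theta * np.ones(3), x0=np.array([0.2, 0.1, 0.05]))
print(np.round(x_final, 3))  # the sink unit dominates, near [0, 0, 1]
```

The competitive structure is what makes the chain useful as a memory substrate: activity injected anywhere is funneled toward a stable fixed point, and each such fixed point can serve as one stored-memory attractor.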
Results from simulations show the system's effectiveness in both learning and inference phases. Figure 4A illustrates trajectories during learning and inference, with background colors indicating the true region of attraction computed through sampling. Figure 4B displays TLN firing rates over time, corresponding to different attractors, and Figure 4C shows successful reconstruction of a noisy input pattern. Robustness analysis, comparing two methods for computing noise bounds, semidefinite programming and linear programming, reveals that the linear programming approach yields larger tolerable noise levels, as shown in Figure 4D. Empirical simulations in Figure 4E indicate that the system begins to fail only when noise exceeds approximately twice the bound found by the linear programming method, demonstrating its reliability within certified limits.
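The spirit of the Figure 4E experiment can be reproduced on a toy system: perturb a known attractor at increasing radii and record how often the dynamics return to it, which locates the empirical failure point that a certified bound should sit below. The two-unit winner-take-all network here is illustrative only, not the paper's certified system.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(W, b, x0, dt=0.01, steps=4000):
    # Euler integration of dx/dt = -x + max(0, W x + b)
    x = x0.copy()
    for _ in range(steps):
        x += dt * (-x + np.maximum(0.0, W @ x + b))
    return x

# Mutually inhibitory pair with two stable attractors, (1, 0) and (0, 1).
W = np.array([[0.0, -2.0], [-2.0, 0.0]])
b = np.ones(2)
x_star = np.array([1.0, 0.0])

# Empirical robustness sweep: random perturbations of radius r around the
# attractor; count how many trajectories recover it.
results = {}
for r in [0.2, 0.5, 0.8, 1.1]:
    ok = 0
    for _ in range(50):
        d = rng.standard_normal(2)
        d *= r / np.linalg.norm(d)
        ok += np.linalg.norm(simulate(W, b, x_star + d) - x_star) < 1e-2
    results[r] = int(ok)
print(results)  # small radii always recover; large radii cross the basin boundary
```

For this network the basin boundary is the symmetry line x1 = x2, so recovery is guaranteed exactly when the perturbation keeps the state on the attractor's side of that line; the sweep makes visible the gap between small always-safe radii and radii where some directions escape.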
The implications of this research are significant for developing AI systems that mimic biological learning processes, with potential applications in robotics, data analysis, and cognitive modeling. By providing formal guarantees of robust memory retrieval, the system offers a principled approach to pattern completion in noisy environments, similar to human cognitive functions. The use of dynamical systems for control and learning enhances biological plausibility, as neurons perform computations via dynamics, making it relevant for neuroscience-inspired AI. This could lead to more adaptive and reliable AI in real-world scenarios where data is often incomplete or corrupted.
Limitations of the current work include the specific parameter constraints required for the chain-structured threshold-linear network, such as ε in [1/2, 1), which may restrict generalizability to other network structures. The robustness guarantees rely on theoretical analyses like region of attraction certification, which, while validated in simulations, may not cover all practical noise scenarios. Future work will explore generalization to hetero-associative memory and motor tasks, as noted in the conclusion, indicating that the system's applicability to more complex associations remains an open area for research.
Original Source
Read the complete research paper
About the Author
Guilherme A.
Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.
Connect on LinkedIn