
Workshop: Associative Memory & Hopfield Networks in 2023

Biologically-inspired adaptive learning in the Hopfield-network based self-optimization model

Aisha Belhadi


A significant portion of the recent growth of artificial intelligence can be attributed to the development of deep learning systems, going hand in hand with the accumulation of Big Data. It therefore makes sense that these systems are most often based on supervised or reinforcement learning, using massive datasets and reward- or error-based rules for training. Though these techniques have achieved impressive levels of accuracy and functionality, rivaling human cognition in some areas, they seem to work very differently from living systems, which can learn, make associations, and adapt with very sparse data, efficient use of energy, and comparatively few training iterations. In the world of machine learning, Hopfield networks, with an architecture that allows for unsupervised learning, associative memory, scaling, and modularity, offer an alternative way of looking at artificial intelligence that has the potential to hew closer to biological forms of learning. This work distills some mechanisms of adaptation in biological systems, including metaplasticity, homeostasis, and inhibition, and proposes ways in which these features can be incorporated into Hopfield networks through adjustments to the learning rate, modularity, and activation rule. The overall aim is to develop deep learning tools that recapitulate the advantages of biological systems, and to provide a computational method that can plausibly model a wide range of living and adaptive systems at varying levels of complexity.
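To make the ingredients concrete, the sketch below shows a minimal classical Hopfield network with Hebbian (outer-product) storage and sign-based recall, plus one speculative illustration of metaplasticity as a per-synapse learning rate that shrinks as a weight's magnitude grows. This is not the model proposed in the work; the function names (`train_hebbian`, `recall`, `metaplastic_update`) and the specific rate-decay rule are assumptions made purely for illustration.

```python
import numpy as np

def train_hebbian(patterns, eta=1.0):
    """Store bipolar (+/-1) patterns with the standard Hebbian outer-product rule."""
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        p = np.asarray(p, dtype=float)
        W += eta * np.outer(p, p) / n  # Hebbian contribution of this pattern
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

def recall(W, state, steps=10):
    """Synchronous sign updates until a fixed point (or a step limit)."""
    s = np.asarray(state, dtype=float)
    for _ in range(steps):
        new = np.sign(W @ s)
        new[new == 0] = 1.0  # break ties toward +1
        if np.array_equal(new, s):
            break  # reached an attractor
        s = new
    return s

def metaplastic_update(W, p, eta0=1.0, tau=5.0):
    """Speculative metaplasticity sketch (an assumption, not the paper's rule):
    each synapse's effective learning rate decays with its current weight
    magnitude, so heavily used connections change more slowly."""
    p = np.asarray(p, dtype=float)
    eta = eta0 / (1.0 + tau * np.abs(W))  # per-synapse adaptive rate
    W = W + eta * np.outer(p, p) / len(p)
    np.fill_diagonal(W, 0.0)
    return W
```

With two orthogonal 8-bit patterns stored, `recall` recovers a pattern from a one-bit corruption, which is the associative-memory behavior the abstract refers to; the metaplastic variant simply caps how much any single update can move an already-strong weight.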
