
Workshop: Associative Memory & Hopfield Networks in 2023

Sequential Learning and Retrieval in a Sparse Distributed Memory: The K-winner Modern Hopfield Network

Shaunak Bhandarkar · James McClelland


Many autoassociative memory models rely on a localist framework, using one neuron or slot per memory. However, neuroscience research suggests that memories depend on sparse, distributed representations over neurons with sparse connectivity. Accordingly, we extend a canonical localist memory model—the modern Hopfield network (MHN)—to a distributed variant called the K-winner modern Hopfield network, equating the number of synaptic parameters (weights) in the localist and K-winner variants. We study both models' retrieval capabilities after exposure to a long sequence of patterns (both random and structured), updating the parameters of the best-matching memory neurons as each new pattern is presented. We find that K-winner MHNs that compromise slightly on retrieval accuracy for the most recent memories exhibit superior retention of older memories.
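The storage scheme described in the abstract—activate a pool of memory neurons, select the K best matches, and update only their weights toward the incoming pattern—can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact formulation: the class name, the Hebbian-style update rule, and the sign-readout at recall are all assumptions made for illustration.

```python
import numpy as np

class KWinnerMemory:
    """Illustrative sketch of a K-winner associative memory.

    Simplified stand-in for the K-winner MHN described in the abstract;
    the update and readout rules here are assumptions, not the paper's.
    """

    def __init__(self, n_neurons, dim, k, lr=0.8, seed=0):
        rng = np.random.default_rng(seed)
        # Random initialization gives each memory neuron a distinct
        # "receptive field" row, so different patterns recruit
        # different winners.
        self.W = rng.standard_normal((n_neurons, dim))
        self.k = k
        self.lr = lr

    def store(self, x):
        # Activate all neurons on the incoming pattern...
        a = self.W @ x
        # ...pick the K best-matching neurons...
        winners = np.argpartition(a, -self.k)[-self.k:]
        # ...and move only their weights toward the new pattern.
        self.W[winners] += self.lr * (x - self.W[winners])

    def recall(self, cue):
        # Retrieval reuses the same K-winner competition:
        a = self.W @ cue
        winners = np.argpartition(a, -self.k)[-self.k:]
        # Readout: sum the winning rows and binarize to a +/-1 pattern.
        return np.sign(self.W[winners].sum(axis=0))


# Sequential storage of random +/-1 patterns, then noisy-cue recall.
rng = np.random.default_rng(1)
mem = KWinnerMemory(n_neurons=200, dim=64, k=10)
patterns = [rng.choice([-1.0, 1.0], size=64) for _ in range(3)]
for p in patterns:
    mem.store(p)

noisy = patterns[-1].copy()
flip = rng.choice(64, size=6, replace=False)
noisy[flip] *= -1  # corrupt ~10% of the cue's bits
recalled = mem.recall(noisy)
```

Because each pattern touches only K of the network's rows, storing a new memory perturbs at most K previously committed neurons, which is the intuition behind the retention advantage the abstract reports.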
