

Poster in Workshop: Associative Memory & Hopfield Networks in 2023

Variable Memory: Beyond the Fixed Memory Assumption in Memory Modeling

Arjun Karuvally · Hava Siegelmann


Abstract:

Memory models play a pivotal role in elucidating the mechanisms through which biological and artificial neural networks store and retrieve information. Traditionally, these models assume that memories are pre-determined, fixed before inference, and stored within synaptic interactions. Yet neural networks can also dynamically store memories, available only at inference time, within their activity. This capacity to bind and manipulate information as variables enhances the generalization capabilities of neural networks. Our research introduces and explores the concept of "variable memories," extending conventional sequence memory models to enable information binding directly in network activity. Adopting this memory perspective, we uncover the computational processes encoded in the learned weights of RNNs trained on simple algorithmic tasks, a fundamental question in the mechanistic understanding of neural networks. Our results underscore the need to move memory models beyond the fixed memory assumption toward more dynamic and flexible memory systems to further our understanding of neural information processing.
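To illustrate the distinction the abstract draws, the sketch below contrasts the two memory regimes in a toy setting: a Hopfield-style network whose memories are fixed in Hebbian weights before inference, versus a hand-built linear RNN whose weights encode only a binding mechanism (a shift register) while the stored content lives entirely in the hidden activity. This is a minimal sketch of the general idea, not the authors' model; the shift-register construction, the sizes, and all names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Fixed memory: patterns baked into synaptic weights before inference ---
# Hopfield-style storage: the memories are known in advance and embedded in W
# via a Hebbian outer-product rule; inference can only retrieve them.
N = 64
patterns = rng.choice([-1.0, 1.0], size=(3, N))   # pre-determined memories
W = (patterns.T @ patterns) / N                   # Hebbian weights
np.fill_diagonal(W, 0.0)

probe = patterns[0].copy()
probe[:10] *= -1.0                                # corrupt the first pattern
for _ in range(20):                               # synchronous retrieval dynamics
    probe = np.where(W @ probe >= 0, 1.0, -1.0)
# With only 3 patterns in 64 units, retrieval should converge to the stored memory.
print("fixed-memory retrieval ok:", np.array_equal(probe, patterns[0]))

# --- Variable memory: content supplied only at inference, held in activity ---
# A hand-built linear RNN acting as a shift register: the weights encode the
# *mechanism* (shift and write), while the stored values live in the activity.
d, T = 8, 5                                       # slot width, number of slots
A = np.zeros((d * T, d * T))
A[d:, :-d] = np.eye(d * (T - 1))                  # shift every slot down by one
B = np.zeros((d * T, d))
B[:d] = np.eye(d)                                 # write the new input into slot 0

h = np.zeros(d * T)
inputs = rng.standard_normal((T, d))              # unseen when A, B were built
for x in inputs:
    h = A @ h + B @ x                             # bind x into network activity
# After T steps the first input occupies the last slot: read it back out.
print("variable-memory recall ok:", np.allclose(h[-d:], inputs[0]))
```

The point of the contrast is that in the second network the inputs are unknown when the weights are constructed; the weights implement a binding mechanism, and the "memory" is whatever activity pattern the inference-time inputs induce.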
