
Workshop: Associative Memory & Hopfield Networks in 2023

Enhanced cue-associated memory in temporally consistent recurrent neural networks

Udith Haputhanthri · Liam Storan · Adam Shai · Surya Ganguli · Mark Schnitzer · Hidenori Tanaka · Fatih Dinc


Recurrent connections are instrumental in creating memories and performing time-delayed computations. During training, networks often explore distinct topological regions of parameter space, each with unique attractor structures that serve specific computational purposes. However, the mechanisms that facilitate these topological transitions, so-called bifurcations, toward an optimal parameter-space configuration remain poorly understood. In this workshop paper, we investigated the learning process of recurrent neural networks in memory-assisted computation and developed a regularization strategy to encourage bifurcations that enhance memory formation capacity. To begin, we examined a delayed addition task that required the network to retain cue-associated memories for an extended duration. We observed two distinct phases during the learning of recurrent neural networks, separated by a bifurcation. In the initial search phase, both training and test loss values remained stable as the network searched for beneficial bifurcations leading to optimal parameter configurations. In the subsequent rapid comprehension phase, the loss values decreased rapidly, and the network quickly learned the task while preserving its topology but updating its geometry. During our analysis, we observed that the gradient direction, i.e., the learning signal, was aligned with the optimal descent direction in the second phase but not the first. To aid learning in the search phase, we developed a temporal consistency regularization that incentivized a subset of neurons to have slow time dynamics, which subsequently decreased the duration of the search. Next, we tested the stability of the learned attractors with and without the temporal consistency regularization via noise-injection experiments, where we uncovered a more robust attractor-subspace formation in the former.
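The delayed addition task described above can be illustrated with a minimal trial generator. This is one plausible instantiation under stated assumptions (two scalar cues on an input channel, a binary cue flag, and a target read out only at the final step); the exact task design, cue timing, and dimensions used by the authors are not specified here.

```python
import numpy as np

def delayed_addition_trial(rng, T=50, cue_times=(5, 15)):
    """Generate one trial of a delayed addition task (illustrative sketch).

    Channel 0 carries the cue value, channel 1 flags that a cue is present.
    The network must hold both cue-associated memories across the delay and
    report their sum at the final time step.
    """
    x = np.zeros((T, 2))
    cues = rng.uniform(-1.0, 1.0, size=len(cue_times))
    for t, c in zip(cue_times, cues):
        x[t, 0] = c     # cue value
        x[t, 1] = 1.0   # cue-present flag
    y = cues.sum()      # target, read out at t = T - 1
    return x, y
```

A training set would consist of many such trials with randomly drawn cue values, so that the network cannot solve the task without retaining the cues in memory.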
Finally, we enforced temporal consistency in a randomly initialized chaotic recurrent neural network to obtain several cue-associated fixed points in an unsupervised, online, and biologically plausible manner. Our results provide a deeper understanding of the role of bifurcations in enhancing associative memory by driving networks toward the desired attractor formation.
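The temporal consistency regularization above can be sketched as a penalty on step-to-step changes in a chosen subset of hidden units. This minimal NumPy version is an assumption-laden illustration, not the authors' implementation: the tanh RNN, the choice of penalized indices, and the penalty coefficient are all placeholders.

```python
import numpy as np

def run_rnn(W_rec, W_in, inputs, h0=None):
    """Simulate a vanilla tanh RNN and return the hidden-state trajectory."""
    n = W_rec.shape[0]
    h = np.zeros(n) if h0 is None else h0
    traj = []
    for x in inputs:
        h = np.tanh(W_rec @ h + W_in @ x)
        traj.append(h)
    return np.array(traj)  # shape: (time, n_hidden)

def temporal_consistency_loss(traj, slow_idx, strength=1e-2):
    """Penalize step-to-step changes of a neuron subset (illustrative).

    Encouraging these units to vary slowly in time incentivizes the slow
    dynamics that can hold cue-associated memories across a delay.
    """
    slow = traj[:, slow_idx]
    return strength * np.mean((slow[1:] - slow[:-1]) ** 2)
```

In training, this term would simply be added to the task loss; in the unsupervised setting described above, it can be minimized on its own, with no task signal, to drive a chaotic network toward fixed points.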
