
Workshop: Symmetry and Geometry in Neural Representations

Grokking in recurrent networks with attractive and oscillatory dynamics

Keith Murray


Generalization is perhaps the most salient property of biological intelligence. In the context of artificial neural networks (ANNs), generalization has been studied through the recently discovered phenomenon of "grokking," whereby small transformers suddenly generalize on modular arithmetic tasks long after fitting the training data. We extended this line of work to continuous-time recurrent neural networks (CT-RNNs) to investigate generalization in neural systems. Inspired by the card game SET, we reformulated previous modular arithmetic tasks as a binary classification task to elicit interpretable CT-RNN dynamics. We found that CT-RNNs learned one of two dynamical mechanisms, characterized by either attractive or oscillatory dynamics. Notably, both mechanisms displayed strong parallels to deterministic finite automata (DFA). In our grokking experiments, we found that attractive dynamics generalized more frequently in training regimes with few withheld data points, while oscillatory dynamics generalized more frequently in training regimes with many withheld data points.
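For context, CT-RNNs are commonly defined by the dynamics τ dh/dt = −h + W tanh(h) + W_in x + b, integrated numerically (e.g., with forward Euler) and read out through a linear layer for classification. The sketch below illustrates this standard formulation only; the network sizes, time constants, and readout are illustrative assumptions, not the architecture or task from the abstract.

```python
import numpy as np

def ctrnn_step(h, x, W, W_in, b, tau=10.0, dt=1.0):
    """One forward-Euler step of tau * dh/dt = -h + W @ tanh(h) + W_in @ x + b."""
    dh = (-h + W @ np.tanh(h) + W_in @ x + b) / tau
    return h + dt * dh

rng = np.random.default_rng(0)
n, m = 16, 4                                   # hidden units, input dims (illustrative)
W = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))  # recurrent weights
W_in = rng.normal(0.0, 1.0, (n, m))            # input weights
b = np.zeros(n)

# Run the network on a constant input and take a final state
h = np.zeros(n)
for _ in range(100):
    h = ctrnn_step(h, np.ones(m), W, W_in, b)

# Hypothetical linear readout for a binary classification label
w_out = rng.normal(0.0, 1.0, n)
label = int(w_out @ np.tanh(h) > 0)
```

In practice the weights would be trained (e.g., by backpropagation through the Euler unrolling), and the learned recurrent dynamics — attractive or oscillatory — are what the abstract analyzes.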
