Unlocking State-Tracking in Linear RNNs Through Negative Eigenvalues
Riccardo Grazzi · Julien Siems · Jörg Franke · Arber Zela · Frank Hutter · Massimiliano Pontil
Keywords:
DeltaNet
Formal Languages
Linear RNN
State Space
State Tracking
GLA
Mamba
Linear Attention
2024 Oral
in
Workshop: Mathematics of Modern Machine Learning (M3L)
Abstract
Linear Recurrent Neural Networks (LRNNs) such as Mamba, RWKV, GLA, mLSTM, and DeltaNet have emerged as efficient alternatives to transformers in large language modeling, offering linear scaling with sequence length and improved training efficiency. However, LRNNs struggle with state-tracking, which is important for tasks such as code comprehension and tracking chess pieces across a board. Current LRNNs cannot solve even parity, the simplest state-tracking task, which non-linear RNNs like LSTMs handle effectively. Recently, Sarrof et al. (2024) demonstrated that the failure of LRNNs to solve parity stems from restricting the eigenvalue range of their diagonal state-transition matrices to $[0, 1]$, and that incorporating negative eigenvalues can resolve this issue. We generalize this result to full-matrix LRNNs: we prove that no finite-precision LRNN whose state-transition matrices have only positive real eigenvalues can solve parity. Notably, we prove that LRNNs can learn any regular language when their state-transition matrices are products of identity-plus-vector-outer-product matrices with eigenvalues in the range $[-1, 1]$. Our empirical results confirm that extending the eigenvalue range of models like Mamba and DeltaNet to include negative values not only enables them to solve parity but also consistently improves their performance on state-tracking tasks. Furthermore, pre-training LRNNs with an extended eigenvalue range for language modeling achieves comparable performance and stability while showing promise for coding tasks.
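The role of negative eigenvalues can be illustrated with a minimal sketch (not the paper's implementation): a scalar linear recurrence whose input-dependent state-transition value is allowed to be $-1$ computes parity exactly, since each $1$ in the input flips the sign of the state. With the transition range restricted to $[0, 1]$, no such sign flip is possible.

```python
def parity_negative_eigenvalue(bits):
    """Toy scalar linear RNN: h_t = lam(x_t) * h_{t-1}, with no additive input.

    The input-dependent transition lam(x) takes values in [-1, 1]:
    lam(1) = -1 (sign flip), lam(0) = +1 (identity).
    After the sequence, h = (-1)^(number of ones), so the parity bit
    is recovered as (1 - h) / 2.
    """
    h = 1.0
    for x in bits:
        lam = -1.0 if x == 1 else 1.0
        h = lam * h
    return int((1 - h) // 2)
```

For example, `parity_negative_eigenvalue([1, 0, 1, 1])` returns `1` (three ones, odd parity), while the all-zeros input returns `0`.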