Recurrent neural networks (RNNs) are widespread machine learning tools for modeling sequential and time series data. They are notoriously hard to train because their loss gradients backpropagated in time tend to saturate or diverge during training. This is known as the exploding and vanishing gradient problem. Previous solutions to this issue either built on rather complicated, purpose-engineered architectures with gated memory buffers, or, more recently, imposed constraints that ensure convergence to a fixed point or restrict (the eigenspectrum of) the recurrence matrix. Such constraints, however, impose severe limitations on the expressivity of the RNN. Essential intrinsic dynamics such as multistability or chaos are disabled. This is inherently at odds with the chaotic nature of many, if not most, time series encountered in nature and society. It is particularly problematic in scientific applications where one aims to reconstruct the underlying dynamical system. Here we offer a comprehensive theoretical treatment of this problem by relating the loss gradients during RNN training to the Lyapunov spectrum of RNN-generated orbits. We mathematically prove that RNNs producing stable equilibrium or cyclic behavior have bounded gradients, whereas the gradients of RNNs with chaotic dynamics always diverge. Based on these analyses and insights we suggest ways to optimize the training process on chaotic data according to the system's Lyapunov spectrum, regardless of the employed RNN architecture.
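To make the link between gradient growth and the Lyapunov spectrum concrete, below is a minimal NumPy sketch under toy assumptions (a vanilla RNN z_{t+1} = tanh(W z_t) with a random weight matrix W; this illustrates the general principle only and is not the paper's models or training code). It estimates the largest Lyapunov exponent of an orbit with the standard Benettin re-normalisation scheme and compares it with the norm of the Jacobian product that enters a gradient backpropagated through k time steps.

```python
import numpy as np

# Toy sketch (not the paper's code): vanilla RNN z_{t+1} = tanh(W z_t).
# BPTT gradients contain products of Jacobians J_t = diag(1 - tanh(W z_t)^2) W
# along the orbit; their growth rate is governed by the largest Lyapunov
# exponent lambda_max.  lambda_max <= 0 keeps such products bounded,
# lambda_max > 0 (chaos) makes them diverge exponentially in k.

rng = np.random.default_rng(0)
N = 32
g = 2.5                                    # gain; g > 1 typically yields chaos in random tanh RNNs
W = g * rng.standard_normal((N, N)) / np.sqrt(N)

def step(z):
    return np.tanh(W @ z)

def jacobian(z):
    """Jacobian of step() at state z: diag(1 - tanh(Wz)^2) @ W."""
    return (1.0 - step(z) ** 2)[:, None] * W

# Largest Lyapunov exponent via Benettin re-normalisation of a tangent vector.
z = rng.standard_normal(N)
for _ in range(500):                       # let transients decay before measuring
    z = step(z)
v = rng.standard_normal(N)
v /= np.linalg.norm(v)
T = 5000
log_growth = 0.0
for _ in range(T):
    v = jacobian(z) @ v
    r = np.linalg.norm(v)
    log_growth += np.log(r)
    v /= r
    z = step(z)
lambda_max = log_growth / T

# Norm of the k-step Jacobian product that appears in the BPTT gradient.
k = 60
J_prod = np.eye(N)
for _ in range(k):
    J_prod = jacobian(z) @ J_prod
    z = step(z)

print(f"estimated lambda_max        : {lambda_max:+.3f}")
print(f"||product of {k} Jacobians|| : {np.linalg.norm(J_prod, 2):.3e}")
print(f"exp(k * lambda_max)          : {np.exp(k * lambda_max):.3e}")
```

With the gain g above 1 the random tanh network is typically chaotic (positive largest exponent), and the Jacobian-product norm grows roughly like exp(k * lambda_max), which is the gradient explosion referred to in the abstract; reducing g below 1 gives a negative exponent and bounded products.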
Author Information
Jonas Mikhaeil (Columbia University)
Zahra Monfared (Heidelberg University)
Daniel Durstewitz (CIMH, Heidelberg University)