

Poster

Recurrent neural networks: vanishing and exploding gradients are not the end of the story

Nicolas Zucchet · Antonio Orvieto

West Ballroom A-D #6906
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Recurrent neural networks (RNNs) notoriously struggle to learn long-term memories, primarily due to vanishing and exploding gradients. The recent success of state-space models (SSMs), a subclass of RNNs, in overcoming such difficulties challenges our theoretical understanding. In this paper, we delve into the optimization challenges of RNNs and discover that, as the memory of a network increases, changes in its parameters result in increasingly large output variations, making gradient-based learning highly sensitive, even without exploding gradients. Our analysis further reveals the importance of the element-wise recurrence design pattern, combined with careful parametrizations, in mitigating this effect. This feature is present in SSMs as well as in other architectures, such as LSTMs. Overall, our insights provide a new explanation for some of the difficulties in gradient-based learning of RNNs and why some architectures perform better than others.
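The sensitivity phenomenon described above can be illustrated with a one-dimensional linear recurrence. The sketch below is a hypothetical illustration (not the authors' code, and the recurrence and constants are assumptions for exposition): as the recurrent weight lam approaches 1, the network's memory lengthens and the output's sensitivity to lam grows roughly like 1/(1-lam)^2, even though gradients through time never explode since |lam| < 1.

```python
# Minimal numerical sketch (assumption: a 1-D linear recurrence
# h_t = lam * h_{t-1} + x_t with constant input x_t = 1).
# The steady state is 1 / (1 - lam) and its derivative w.r.t. lam is
# 1 / (1 - lam)^2, so small parameter changes cause large output changes
# as lam -> 1, despite gradients through time staying bounded (|lam| < 1).

import numpy as np


def final_state(lam: float, T: int = 10_000) -> float:
    """Run the scalar recurrence h_t = lam * h_{t-1} + 1 for T steps."""
    h = 0.0
    for _ in range(T):
        h = lam * h + 1.0
    return h


eps = 1e-6
for lam in [0.9, 0.99, 0.999, 0.9999]:
    # Finite-difference sensitivity of the final state with respect to lam.
    sensitivity = (final_state(lam + eps) - final_state(lam)) / eps
    print(f"lam={lam:<7} memory ~ 1/(1-lam) = {1 / (1 - lam):>8.0f}  "
          f"d(output)/d(lam) ~ {sensitivity:.3e}")
```

Running this prints sensitivities that grow by orders of magnitude as lam approaches 1, which is the sense in which gradient-based learning becomes ill-conditioned for long memories even without exploding gradients.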
