Training a neural network with the gradient descent algorithm gives rise to a discrete-time nonlinear dynamical system. Consequently, behaviors typically observed in such systems emerge during training, such as convergence to an orbit rather than to a fixed point, or the dependence of convergence on the initialization. The step size of the algorithm plays a critical role in these behaviors: it determines the subset of the local optima that the algorithm can converge to, and it specifies the magnitude of the oscillations if the algorithm converges to an orbit. To elucidate the effects of the step size on the training of neural networks, we study the gradient descent algorithm as a discrete-time dynamical system, and by analyzing the Lyapunov stability of different solutions, we show the relationship between the step size of the algorithm and the solutions that can be obtained with this algorithm. The results provide an explanation for several phenomena observed in practice, including the deterioration in the training error with increased depth, the hardness of estimating linear mappings with large singular values, and the distinct performance of deep residual networks.
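As a minimal illustration of the step-size phenomenon described in the abstract (a sketch, not taken from the paper itself), the snippet below runs gradient descent on a two-layer scalar linear network f(x) = w2 * w1 * x fit to the target map y = 3x. The loss, initialization, and step sizes are illustrative assumptions; the point is that the same loss landscape yields convergence to a fixed point for small step sizes and persistent oscillation for larger ones.

```python
# Minimal sketch (not from the paper): gradient descent on a two-layer
# scalar linear network f(x) = w2 * w1 * x, fit to the target slope 3.
# The loss L(w1, w2) = 0.5 * (w2 * w1 - 3)^2 turns gradient descent into
# a discrete-time dynamical system in (w1, w2); the step size decides
# whether the iterates settle at a minimum or keep oscillating.
import numpy as np

def run_gd(step_size, num_steps=200, w_init=(1.0, 1.0)):
    w1, w2 = w_init
    trajectory = []
    for _ in range(num_steps):
        residual = w2 * w1 - 3.0      # prediction error for target slope 3
        grad_w1 = residual * w2       # dL/dw1
        grad_w2 = residual * w1       # dL/dw2
        w1 -= step_size * grad_w1
        w2 -= step_size * grad_w2
        trajectory.append(w1 * w2)    # end-to-end slope after the update
    return np.array(trajectory)

for lr in (0.05, 0.3, 0.6):           # illustrative step sizes
    end_to_end = run_gd(lr)
    print(f"step size {lr}: last 4 end-to-end slopes {np.round(end_to_end[-4:], 3)}")
```

With these values, the two smaller step sizes drive the end-to-end slope to the target value 3, while the largest keeps the iterates bounded but oscillating indefinitely: at the minimum w1 = w2 = sqrt(3) the largest Hessian eigenvalue is 6, so that fixed point is stable only for step sizes below roughly 1/3 in this toy problem.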
Author Information
Kamil Nar (University of California, Berkeley)
Shankar Sastry (Department of EECS, UC Berkeley)
Related Events (a corresponding poster, oral, or spotlight)
-
2018 Spotlight: Step Size Matters in Deep Learning
Tue. Dec 4th 09:45 -- 09:50 PM, Room 220 E
More from the Same Authors
-
2012 Poster: CPRL -- An Extension of Compressive Sensing to the Phase Retrieval Problem
Henrik Ohlsson · Allen Y Yang · Roy Dong · Shankar Sastry