

Poster

Fast Convergence of Natural Gradient Descent for Over-Parameterized Neural Networks

Guodong Zhang · James Martens · Roger Grosse

East Exhibition Hall B + C #198

Keywords: [ Optimization -> Non-Convex Optimization; Theory ] [ Learning Theory ] [ Deep Learning ] [ Optimization for Deep Networks ]


Abstract:

Natural gradient descent has proven very effective at mitigating the catastrophic effects of pathological curvature in the objective function, but little is known theoretically about its convergence properties, especially for \emph{non-linear} networks. In this work, we analyze for the first time the speed of convergence to a global optimum for natural gradient descent on non-linear neural networks with the squared error loss. We identify two conditions which guarantee global convergence: (1) the Jacobian matrix (of the network's output for all training cases w.r.t. the parameters) has full row rank, and (2) the Jacobian matrix is stable under small perturbations around the initialization. For two-layer ReLU neural networks (i.e. with one hidden layer), we prove that these two conditions do hold throughout training under the assumptions that the inputs do not degenerate and the network is over-parameterized. We further extend our analysis to more general loss functions with similar convergence properties. Lastly, we show that K-FAC, an approximate natural gradient descent method, also converges to a global minimum under the same assumptions.
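To make the setup concrete, here is a minimal sketch of the update and the two conditions in generic notation (the symbols $\theta$, $f$, $J$, $F$, $\eta$, and $y$ are introduced here for illustration and need not match the paper's exact formulation). Write $f(\theta) \in \mathbb{R}^n$ for the network's outputs on the $n$ training cases, $J(\theta) = \partial f(\theta)/\partial \theta$ for the Jacobian, and $L(\theta) = \tfrac{1}{2}\|f(\theta) - y\|^2$ for the squared error loss. For this loss the Fisher information matrix coincides (up to scaling) with the Gauss-Newton matrix, so natural gradient descent takes steps of the form
\[
    \theta_{t+1} = \theta_t - \eta\, F(\theta_t)^{-1} \nabla L(\theta_t),
    \qquad F(\theta) = J(\theta)^{\top} J(\theta),
\]
with a pseudo-inverse or damping used when $F$ is singular, as in the over-parameterized regime. In this notation, condition (1) asks that $J(\theta_0)$ have full row rank, i.e. $\lambda_{\min}\!\big(J(\theta_0) J(\theta_0)^{\top}\big) > 0$, and condition (2) asks that $J(\theta)$ stay close to $J(\theta_0)$ under small perturbations of the parameters, so that the Gram matrix $J(\theta) J(\theta)^{\top}$ remains well-conditioned throughout training.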
