

Poster

A Layer-Wise Natural Gradient Optimizer for Training Deep Neural Networks

Xiaolei Liu · Shaoshuai Li · Kaixin Gao · Binfeng Wang

East Exhibit Hall A-C #2006
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Second-order optimization algorithms, such as the Newton method and the natural gradient descent (NGD) method, exhibit excellent convergence properties for training deep neural networks, but their high computational cost limits their practical application. In this paper, we focus on the NGD method and propose a novel layer-wise natural gradient descent (LNGD) method to further reduce computational cost and accelerate the training process. Specifically, based on the block diagonal approximation of the Fisher information matrix, we first propose a layer-wise sampling method to compute each block matrix without performing a complete back-propagation. Then, each block matrix is approximated as a Kronecker product of two smaller matrices, one of which is diagonal, while keeping the trace equal before and after approximation. Together, these two steps yield a new approximation of the Fisher information matrix that effectively reduces the computational cost while preserving the main information of each block matrix. Moreover, we propose a new adaptive layer-wise learning rate to further accelerate training. Based on these approaches, we propose the LNGD optimizer. A global convergence analysis of LNGD is established under some assumptions. Extensive experiments on image classification and machine translation tasks show that our method is highly competitive with state-of-the-art methods.
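The abstract does not spell out the update formulas, so the following is only a minimal NumPy sketch, not the authors' algorithm, of the general idea it describes: approximating one layer's Fisher block by a Kronecker product in which one factor is diagonal, rescaling so the trace matches the original block, and using the factored inverse as a layer-wise preconditioner. All shapes, the damping term `lam`, and the choice of which factor is diagonal are illustrative assumptions.

```python
# Hedged sketch: Kronecker-factored, trace-matched Fisher block for one
# linear layer, followed by the resulting layer-wise natural-gradient step.
# This is NOT the LNGD implementation, only an illustration of the idea.
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_out = 256, 8, 4            # batch size, layer input/output sizes

a = rng.standard_normal((n, d_in))    # layer inputs (activations)
g = rng.standard_normal((n, d_out))   # back-propagated output gradients

# Per-sample weight gradients of a linear layer: g_i a_i^T, flattened.
per_sample = np.einsum('ni,nj->nij', g, a).reshape(n, -1)
grad = per_sample.mean(axis=0)                        # mini-batch gradient
fisher_trace = (per_sample ** 2).sum(axis=1).mean()   # tr(F) = E[||vec(dW)||^2]

# Kronecker factors: a dense activation factor and a diagonal gradient factor.
A = a.T @ a / n                       # (d_in x d_in)
D = np.diag((g ** 2).mean(axis=0))    # (d_out x d_out), diagonal by construction

# Rescale one factor so the approximation's trace matches tr(F):
# tr(A ⊗ D) = tr(A) * tr(D).
A *= fisher_trace / (np.trace(A) * np.trace(D))

# Preconditioned step, computed factor-wise via the Kronecker identity
# (A ⊗ D)^{-1} vec(G) = vec(D^{-1} G A^{-1}), with damping lam for stability.
lam = 1e-3
G = grad.reshape(d_out, d_in)
A_inv = np.linalg.inv(A + lam * np.eye(d_in))
D_inv = np.diag(1.0 / (np.diag(D) + lam))
update = D_inv @ G @ A_inv            # layer-wise natural-gradient direction
print(update.shape)                   # (4, 8)
```

The factor-wise inverse is what makes this kind of approximation cheap: it avoids forming or inverting the full (d_in * d_out) x (d_in * d_out) block, and the diagonal factor reduces one of the two inversions to an element-wise division.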
