
Natural Neural Networks
Guillaume Desjardins · Karen Simonyan · Razvan Pascanu · koray kavukcuoglu

Thu Dec 10 08:00 AM -- 12:00 PM (PST) @ 210 C #9

We introduce Natural Neural Networks, a novel family of algorithms that speed up convergence by adapting their internal representation during training to improve conditioning of the Fisher matrix. In particular, we show a specific example that employs a simple and efficient reparametrization of the neural network weights by implicitly whitening the representation obtained at each layer, while preserving the feed-forward computation of the network. Such networks can be trained efficiently via the proposed Projected Natural Gradient Descent algorithm (PRONG), which amortizes the cost of these reparametrizations over many parameter updates and is closely related to the Mirror Descent online learning algorithm. We highlight the benefits of our method on both unsupervised and supervised learning tasks, and showcase its scalability by training on the large-scale ImageNet Challenge dataset.
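The core reparametrization idea in the abstract — whitening each layer's representation while leaving the feed-forward computation unchanged — can be sketched as follows. This is a minimal illustrative example, not the authors' PRONG implementation: the function and variable names are assumptions, and it shows only the function-preserving weight compensation for a single linear layer, ignoring the amortization schedule over parameter updates.

```python
import numpy as np

# Sketch: for a linear layer y = W x + b, insert a whitening transform
# z = U (x - mu) and compensate the weights so the output is unchanged:
#   y = V z + c,  with  V = W U^{-1}  and  c = b + W mu.

rng = np.random.default_rng(0)

def whitening_matrix(X, eps=1e-5):
    """ZCA-style whitening matrix U so that U (x - mu) has ~identity covariance."""
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    U = evecs @ np.diag(1.0 / np.sqrt(evals + eps)) @ evecs.T  # symmetric
    return mu, U

# A layer and a batch of correlated inputs.
d_in, d_out, n = 4, 3, 256
W = rng.normal(size=(d_out, d_in))
b = rng.normal(size=d_out)
X = rng.normal(size=(n, d_in)) @ rng.normal(size=(d_in, d_in))

mu, U = whitening_matrix(X)
V = W @ np.linalg.inv(U)   # compensated weights
c = b + W @ mu             # compensated bias

y_orig = X @ W.T + b                       # original layer
y_reparam = (X - mu) @ U.T @ V.T + c       # whitened representation, same output
print(np.allclose(y_orig, y_reparam))  # True
```

Because the whitened representation has approximately identity covariance, gradient updates on `V` are better conditioned, while the compensation `V = W U^{-1}`, `c = b + W mu` guarantees the network computes exactly the same function at the moment of reparametrization.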

Author Information

Guillaume Desjardins (Google DeepMind)
Karen Simonyan (Google DeepMind)
Razvan Pascanu (Google DeepMind)
koray kavukcuoglu (Google DeepMind)