Equilibrated adaptive learning rates for non-convex optimization
Parameter-specific adaptive learning rate methods are computationally efficient ways to reduce the ill-conditioning problems encountered when training large deep networks. Following recent work that strongly suggests that most of the critical points encountered when training such networks are saddle points, we investigate how accounting for the presence of negative eigenvalues of the Hessian can help us design better-suited adaptive learning rate schemes. We show that the popular Jacobi preconditioner has undesirable behavior in the presence of both positive and negative curvature, and present theoretical and empirical evidence that the so-called equilibration preconditioner is comparatively better suited to non-convex problems. We introduce a novel adaptive learning rate scheme, called ESGD, based on the equilibration preconditioner. Our experiments demonstrate that both schemes yield very similar step directions, but that ESGD sometimes surpasses RMSProp in terms of convergence speed, always clearly improving over plain stochastic gradient descent.
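The equilibration preconditioner scales parameter i by D_i = sqrt(E_v[(Hv)_i^2]) for Gaussian v, which in expectation equals the l2 norm of the i-th Hessian row and therefore stays positive even under negative curvature. Below is a minimal NumPy sketch of this idea on a toy quadratic with an explicit Hessian, not the paper's full ESGD algorithm (which maintains a running estimate of D from Hessian-vector products computed by automatic differentiation during training); the learning rate, sample count, and toy matrix here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-conditioned quadratic f(x) = 0.5 x^T A x (illustrative only;
# the paper targets deep networks, where Hv comes from the R-operator).
A = np.diag([100.0, 1.0, 0.01])

def grad(x):
    return A @ x

def hess_vec(v):
    # Hessian-vector product; exact here since the Hessian is just A.
    return A @ v

def equilibration_diag(dim, n_samples=2000):
    # Monte Carlo estimate of D_i = sqrt(E_v[(Hv)_i^2]) with v ~ N(0, I).
    # In expectation this is the l2 norm of row i of the Hessian.
    acc = np.zeros(dim)
    for _ in range(n_samples):
        v = rng.standard_normal(dim)
        acc += hess_vec(v) ** 2
    return np.sqrt(acc / n_samples)

D = equilibration_diag(3)

# Preconditioned update: theta <- theta - lr * grad / (D + eps).
# After equilibration the effective condition number is near 1,
# so a single learning rate works for all coordinates.
x = np.array([1.0, 1.0, 1.0])
lr, eps = 0.5, 1e-8
for _ in range(100):
    x = x - lr * grad(x) / (D + eps)

print(D)                  # close to the Hessian row norms [100, 1, 0.01]
print(np.linalg.norm(x))  # near zero: convergence despite ill-conditioning
```

Note that plain SGD on this problem would need lr below 0.02 to avoid diverging along the stiff direction, while making essentially no progress along the flat one; the equilibrated update removes that trade-off.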
Author Information
Yann Dauphin (Facebook AI Research)
Harm de Vries (ServiceNow Research)
Yoshua Bengio (U. Montreal)
More from the Same Authors
- 2019: Opening Remarks (Florian Strub · Harm de Vries · Abhishek Das · Stefan Lee · Erik Wijmans · Dor Arad Hudson · Alane Suhr)
- 2019 Workshop: Visually Grounded Interaction and Language (Florian Strub · Abhishek Das · Erik Wijmans · Harm de Vries · Stefan Lee · Alane Suhr · Dor Arad Hudson)
- 2018 Workshop: Visually grounded interaction and language (Florian Strub · Harm de Vries · Erik Wijmans · Samyak Datta · Ethan Perez · Mateusz Malinowski · Stefan Lee · Peter Anderson · Aaron Courville · Jeremie Mary · Dhruv Batra · Devi Parikh · Olivier Pietquin · Chiori Hori · Tim Marks · Anoop Cherian)
- 2017 Workshop: Visually grounded interaction and language (Florian Strub · Harm de Vries · Abhishek Das · Satwik Kottur · Stefan Lee · Mateusz Malinowski · Olivier Pietquin · Devi Parikh · Dhruv Batra · Aaron Courville · Jeremie Mary)
- 2017 Poster: Modulating early visual processing by language (Harm de Vries · Florian Strub · Jeremie Mary · Hugo Larochelle · Olivier Pietquin · Aaron Courville)
- 2017 Spotlight: Modulating early visual processing by language (Harm de Vries · Florian Strub · Jeremie Mary · Hugo Larochelle · Olivier Pietquin · Aaron Courville)
- 2016: Yoshua Bengio, Credit assignment: beyond backpropagation (Yoshua Bengio)
- 2016: Panel on "Explainable AI" (Yoshua Bengio · Alessio Lomuscio · Gary Marcus · Stephen H Muggleton · Michael Witbrock)
- 2016 Symposium: Deep Learning Symposium (Yoshua Bengio · Yann LeCun · Navdeep Jaitly · Roger Grosse)
- 2016 Poster: Architectural Complexity Measures of Recurrent Neural Networks (Saizheng Zhang · Yuhuai Wu · Tong Che · Zhouhan Lin · Roland Memisevic · Russ Salakhutdinov · Yoshua Bengio)
- 2016 Poster: Professor Forcing: A New Algorithm for Training Recurrent Networks (Alex M Lamb · Anirudh Goyal · Ying Zhang · Saizheng Zhang · Aaron Courville · Yoshua Bengio)
- 2016 Poster: On Multiplicative Integration with Recurrent Neural Networks (Yuhuai Wu · Saizheng Zhang · Ying Zhang · Yoshua Bengio · Russ Salakhutdinov)
- 2016 Poster: Binarized Neural Networks (Itay Hubara · Matthieu Courbariaux · Daniel Soudry · Ran El-Yaniv · Yoshua Bengio)
- 2015: RL for DL (Yoshua Bengio)
- 2015: Learning Representations for Unsupervised and Transfer Learning (Yoshua Bengio)
- 2015 Poster: Attention-Based Models for Speech Recognition (Jan K Chorowski · Dzmitry Bahdanau · Dmitriy Serdyuk · Kyunghyun Cho · Yoshua Bengio)
- 2015 Poster: Equilibrated adaptive learning rates for non-convex optimization (Yann Dauphin · Harm de Vries · Yoshua Bengio)
- 2015 Spotlight: Attention-Based Models for Speech Recognition (Jan K Chorowski · Dzmitry Bahdanau · Dmitriy Serdyuk · Kyunghyun Cho · Yoshua Bengio)
- 2015 Poster: A Recurrent Latent Variable Model for Sequential Data (Junyoung Chung · Kyle Kastner · Laurent Dinh · Kratarth Goel · Aaron Courville · Yoshua Bengio)
- 2015 Poster: BinaryConnect: Training Deep Neural Networks with binary weights during propagations (Matthieu Courbariaux · Yoshua Bengio · Jean-Pierre David)
- 2013 Poster: Stochastic Ratio Matching of RBMs for Sparse High-Dimensional Inputs (Yann Dauphin · Yoshua Bengio)