Poster
Painless Stochastic Gradient: Interpolation, Line-Search, and Convergence Rates
Sharan Vaswani · Aaron Mishkin · Issam Laradji · Mark Schmidt · Gauthier Gidel · Simon Lacoste-Julien

Tue Dec 10 05:30 PM -- 07:30 PM (PST) @ East Exhibition Hall B + C #123

Recent works have shown that stochastic gradient descent (SGD) achieves the fast convergence rates of full-batch gradient descent for over-parameterized models satisfying certain interpolation conditions. However, the step-size used in these works depends on unknown quantities and SGD's practical performance heavily relies on the choice of this step-size. We propose to use line-search techniques to automatically set the step-size when training models that can interpolate the data. In the interpolation setting, we prove that SGD with a stochastic variant of the classic Armijo line-search attains the deterministic convergence rates for both convex and strongly-convex functions. Under additional assumptions, SGD with Armijo line-search is shown to achieve fast convergence for non-convex functions. Furthermore, we show that stochastic extra-gradient with a Lipschitz line-search attains linear convergence for an important class of non-convex functions and saddle-point problems satisfying interpolation. To improve the proposed methods' practical performance, we give heuristics to use larger step-sizes and acceleration. We compare the proposed algorithms against numerous optimization methods on standard classification tasks using both kernel methods and deep networks. The proposed methods result in competitive performance across all models and datasets, while being robust to the precise choices of hyper-parameters. For multi-class classification using deep networks, SGD with Armijo line-search results in both faster convergence and better generalization.
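
For concreteness, the following is a minimal sketch of SGD with a stochastic Armijo backtracking line-search of the kind described above. All function names, constants, and the toy problem are illustrative, and the paper's heuristics for larger step-sizes and acceleration are omitted; this is not the authors' reference implementation.

import numpy as np

def sgd_armijo(loss, grad, w, batches, n_epochs=10, eta_max=1.0, c=0.1, beta=0.7):
    # SGD with a stochastic Armijo line-search: at each step, backtrack
    # from eta_max until the mini-batch loss satisfies
    #   f_i(w - eta * g) <= f_i(w) - c * eta * ||g||^2.
    for _ in range(n_epochs):
        for batch in batches:
            g = grad(w, batch)
            g_norm_sq = g @ g
            f0 = loss(w, batch)
            eta = eta_max
            while loss(w - eta * g, batch) > f0 - c * eta * g_norm_sq:
                eta *= beta
                if eta < 1e-8:  # guard against endless backtracking
                    break
            w = w - eta * g
    return w

# Illustrative usage: noiseless least-squares, where interpolation holds.
rng = np.random.default_rng(0)
X, w_true = rng.normal(size=(100, 10)), rng.normal(size=10)
y = X @ w_true  # zero training loss is attainable
batches = [(X[i:i + 10], y[i:i + 10]) for i in range(0, 100, 10)]
loss = lambda w, b: 0.5 * np.mean((b[0] @ w - b[1]) ** 2)
grad = lambda w, b: b[0].T @ (b[0] @ w - b[1]) / len(b[1])
w_hat = sgd_armijo(loss, grad, np.zeros(10), batches, n_epochs=50)

Because the backtracking condition is checked on the same mini-batch used to compute the gradient, no extra gradient evaluations are needed, only extra loss evaluations during backtracking.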

Author Information

Sharan Vaswani (Mila, Université de Montréal)
Aaron Mishkin (University of British Columbia)
Issam Laradji (University of British Columbia)
Mark Schmidt (University of British Columbia)
Gauthier Gidel (Mila)

I am a Ph.D. student at Mila, supervised by Simon Lacoste-Julien. I graduated from ENS Ulm and Université Paris-Saclay, and was a visiting Ph.D. student at Sierra. I also worked for six months as a freelance data scientist for Monsieur Drive (acquired by Criteo), and I recently co-founded a startup called Krypto. My work focuses on optimization applied to machine learning: I develop new optimization algorithms and study the role of optimization in the learning procedure; in short, how to learn faster and better. More details can be found in my resume. I identify with the machine learning (NIPS, ICML, AISTATS, and ICLR) and optimization (SIAM OP) communities.

Simon Lacoste-Julien (Mila, Université de Montréal & SAIL Montreal)

Simon Lacoste-Julien is an associate professor at Mila and DIRO, Université de Montréal, and a Canada CIFAR AI Chair holder. He also heads, part-time, the SAIT AI Lab Montreal from Samsung. His research interests are machine learning and applied math, with applications in related fields such as computer vision and natural language processing. He obtained a B.Sc. in mathematics, physics, and computer science from McGill, earned a Ph.D. in computer science from UC Berkeley, and did a postdoc at the University of Cambridge. He spent a few years as research faculty at INRIA and École normale supérieure in Paris before returning to his roots in Montreal in 2016, answering Yoshua Bengio's call to help grow the Montreal AI ecosystem.
