Due to their simplicity and excellent performance, parallel asynchronous variants of stochastic gradient descent have become popular methods to solve a wide range of large-scale optimization problems on multi-core architectures. Yet, despite their practical success, support for nonsmooth objectives is still lacking, making them unsuitable for many problems of interest in machine learning, such as the Lasso, group Lasso or empirical risk minimization with convex constraints. In this work, we propose and analyze ProxASAGA, a fully asynchronous sparse method inspired by SAGA, a variance reduced incremental gradient algorithm. The proposed method is easy to implement and significantly outperforms the state of the art on several nonsmooth, large-scale problems. We prove that our method achieves a theoretical linear speedup with respect to the sequential version under assumptions on the sparsity of gradients and block-separability of the proximal term. Empirical benchmarks on a multi-core architecture illustrate practical speedups of up to 12x on a 20-core machine.
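To make the building block concrete, below is a minimal sketch of the sequential proximal SAGA update applied to the Lasso, the kind of nonsmooth problem the abstract mentions. This is an illustrative assumption, not the paper's ProxASAGA implementation (which additionally makes the update sparse, lock-free, and asynchronous); the function names, step size, and iteration count are hypothetical.

```python
import numpy as np

def prox_l1(x, t):
    """Soft-thresholding: proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_saga_lasso(A, b, lam, step, n_iter=100_000, seed=0):
    """Sequential proximal SAGA sketch for the Lasso:
        min_x (1/2n) ||Ax - b||^2 + lam * ||x||_1
    Hypothetical illustration of the variance-reduced update,
    not the paper's asynchronous ProxASAGA method.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    # Memory of past per-sample gradients (the variance-reduction device;
    # for linear models a scalar per sample suffices, dense here for clarity).
    grad_memory = np.zeros((n, d))
    grad_avg = grad_memory.mean(axis=0)
    for _ in range(n_iter):
        i = rng.integers(n)
        # Gradient of f_i(x) = (1/2)(a_i^T x - b_i)^2.
        g_i = (A[i] @ x - b[i]) * A[i]
        # SAGA variance-reduced gradient estimate.
        v = g_i - grad_memory[i] + grad_avg
        # Proximal step handles the nonsmooth l1 term.
        x = prox_l1(x - step * v, step * lam)
        # Update the running average and the memory in O(d).
        grad_avg += (g_i - grad_memory[i]) / n
        grad_memory[i] = g_i
    return x
```

The proximal step is what distinguishes this from plain (async) SAGA and is why block-separability of the proximal term matters in the paper's analysis: a separable prox can be applied coordinate-block by coordinate-block, which is what permits sparse, asynchronous updates.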
Author Information
Fabian Pedregosa (UC Berkeley / ETH Zurich)
Rémi Leblond (INRIA)
Simon Lacoste-Julien (Université de Montréal)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Spotlight: Breaking the Nonsmooth Barrier: A Scalable Parallel Method for Composite Optimization
  Wed. Dec 6th, 11:20 -- 11:25 PM, Hall C
More from the Same Authors
- 2019: Spotlight talks
  Diego Granziol · Fabian Pedregosa · Hilal Asi
- 2017 Poster: On Structured Prediction Theory with Calibrated Convex Surrogate Losses
  Anton Osokin · Francis Bach · Simon Lacoste-Julien
- 2017 Oral: On Structured Prediction Theory with Calibrated Convex Surrogate Losses
  Anton Osokin · Francis Bach · Simon Lacoste-Julien
- 2015 Poster: On the Global Linear Convergence of Frank-Wolfe Optimization Variants
  Simon Lacoste-Julien · Martin Jaggi
- 2015 Poster: Barrier Frank-Wolfe for Marginal Inference
  Rahul G Krishnan · Simon Lacoste-Julien · David Sontag
- 2015 Poster: Variance Reduced Stochastic Gradient Descent with Neighbors
  Thomas Hofmann · Aurelien Lucchi · Simon Lacoste-Julien · Brian McWilliams
- 2015 Poster: Rethinking LDA: Moment Matching for Discrete ICA
  Anastasia Podosinnikova · Francis Bach · Simon Lacoste-Julien
- 2014 Poster: SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives
  Aaron Defazio · Francis Bach · Simon Lacoste-Julien
- 2009 Workshop: The Generative and Discriminative Learning Interface
  Simon Lacoste-Julien · Percy Liang · Guillaume Bouchard
- 2008 Poster: DiscLDA: Discriminative Learning for Dimensionality Reduction and Classification
  Simon Lacoste-Julien · Fei Sha · Michael Jordan