Sparse coding is typically solved by iterative optimization techniques, such as the Iterative Shrinkage-Thresholding Algorithm (ISTA). Unfolding and learning weights of ISTA using neural networks is a practical way to accelerate estimation. In this paper, we study the selection of adapted step sizes for ISTA. We show that a simple step size strategy can improve the convergence rate of ISTA by leveraging the sparsity of the iterates. However, it is impractical in most large-scale applications. Therefore, we propose a network architecture where only the step sizes of ISTA are learned. We demonstrate that for a large class of unfolded algorithms, if the algorithm converges to the solution of the Lasso, its last layers correspond to ISTA with learned step sizes. Experiments show that our method is competitive with state-of-the-art networks when the solutions are sparse enough.
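To make the role of the step sizes concrete, here is a minimal NumPy sketch of plain ISTA for the Lasso in which the per-iteration step-size sequence is exposed as a parameter (the quantity the paper proposes to learn); when it is omitted, the classical constant step 1/||D||_2^2 is used. The names (`ista`, `soft_threshold`, `step_sizes`, `reg`) and the example at the end are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np


def soft_threshold(z, threshold):
    """Proximal operator of the l1 norm (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - threshold, 0.0)


def ista(x, D, reg, n_iter=100, step_sizes=None):
    """Plain ISTA for the Lasso: min_z 0.5 * ||x - D z||^2 + reg * ||z||_1.

    step_sizes: optional sequence of per-iteration step sizes; if None,
    the constant step 1 / ||D||_2^2 is used at every iteration.
    """
    z = np.zeros(D.shape[1])
    lipschitz = np.linalg.norm(D, ord=2) ** 2      # Lipschitz constant of the gradient
    for k in range(n_iter):
        step = 1.0 / lipschitz if step_sizes is None else step_sizes[k]
        grad = D.T @ (D @ z - x)                   # gradient of the data-fit term
        z = soft_threshold(z - step * grad, step * reg)
    return z


# Example usage (hypothetical data): recover a sparse code from a random dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((50, 100))
x = D @ (rng.standard_normal(100) * (rng.random(100) < 0.05))
z_hat = ista(x, D, reg=0.1, n_iter=200)
```

In an unfolded network of the kind studied here, each iteration of this loop becomes a layer, and the `step_sizes` sequence is the set of learned parameters.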
Author Information
Pierre Ablin (Inria)
Thomas Moreau (Inria)
Mathurin Massias (Inria)
Alexandre Gramfort (Inria)
More from the Same Authors
- 2023 Poster: How to Scale Your EMA
  Dan Busbridge · Jason Ramapuram · Pierre Ablin · Tatiana Likhomanenko · Eeshan Gunesh Dhekane · Xavier Suau Cuadros · Russell Webb
- 2022 Poster: Benchopt: Reproducible, efficient and collaborative optimization benchmarks
  Thomas Moreau · Mathurin Massias · Alexandre Gramfort · Pierre Ablin · Pierre-Antoine Bannier · Benjamin Charlier · Mathieu Dagréou · Tom Dupre la Tour · Ghislain DURIF · Cassio F. Dantas · Quentin Klopfenstein · Johan Larsson · En Lai · Tanguy Lefort · Benoît Malézieux · Badr MOUFAD · Binh T. Nguyen · Alain Rakotomamonjy · Zaccharie Ramzi · Joseph Salmon · Samuel Vaiter
- 2022 Poster: Deep invariant networks with differentiable augmentation layers
  Cédric ROMMEL · Thomas Moreau · Alexandre Gramfort
- 2022 Poster: A framework for bilevel optimization that enables stochastic and global variance reduction algorithms
  Mathieu Dagréou · Pierre Ablin · Samuel Vaiter · Thomas Moreau
- 2022 Poster: Do Residual Neural Networks discretize Neural Ordinary Differential Equations?
  Michael Sander · Pierre Ablin · Gabriel Peyré
- 2021 Poster: Shared Independent Component Analysis for Multi-Subject Neuroimaging
  Hugo Richard · Pierre Ablin · Bertrand Thirion · Alexandre Gramfort · Aapo Hyvarinen
- 2020 Poster: Learning to solve TV regularised problems with unrolled algorithms
  Hamza Cherkaoui · Jeremias Sulam · Thomas Moreau
- 2020 Poster: Modeling Shared responses in Neuroimaging Studies through MultiView ICA
  Hugo Richard · Luigi Gresele · Aapo Hyvarinen · Bertrand Thirion · Alexandre Gramfort · Pierre Ablin
- 2020 Spotlight: Modeling Shared responses in Neuroimaging Studies through MultiView ICA
  Hugo Richard · Luigi Gresele · Aapo Hyvarinen · Bertrand Thirion · Alexandre Gramfort · Pierre Ablin
- 2020 Poster: NeuMiss networks: differentiable programming for supervised learning with missing values.
  Marine Le Morvan · Julie Josse · Thomas Moreau · Erwan Scornet · Gael Varoquaux
- 2020 Oral: NeuMiss networks: differentiable programming for supervised learning with missing values.
  Marine Le Morvan · Julie Josse · Thomas Moreau · Erwan Scornet · Gael Varoquaux
- 2020 Poster: Statistical control for spatio-temporal MEG/EEG source imaging with desparsified mutli-task Lasso
  Jerome-Alexis Chevalier · Joseph Salmon · Alexandre Gramfort · Bertrand Thirion
- 2019 Poster: Handling correlated and repeated measurements with the smoothed multivariate square-root Lasso
  Quentin Bertrand · Mathurin Massias · Alexandre Gramfort · Joseph Salmon
- 2019 Poster: Manifold-regression to predict from MEG/EEG brain signals without source modeling
  David Sabbagh · Pierre Ablin · Gael Varoquaux · Alexandre Gramfort · Denis A. Engemann
- 2018 Poster: Multivariate Convolutional Sparse Coding for Electromagnetic Brain Signals
  Tom Dupré la Tour · Thomas Moreau · Mainak Jas · Alexandre Gramfort
- 2016 Poster: GAP Safe Screening Rules for Sparse-Group Lasso
  Eugene Ndiaye · Olivier Fercoq · Alexandre Gramfort · Joseph Salmon
- 2015 Poster: GAP Safe screening rules for sparse multi-task and multi-class models
  Eugene Ndiaye · Olivier Fercoq · Alexandre Gramfort · Joseph Salmon