Abstract
Total Variation (TV) is a popular regularization strategy that promotes piecewise-constant signals by constraining the ℓ1-norm of the first-order derivative of the estimated signal. The resulting optimization problem is usually solved with iterative algorithms such as proximal gradient descent, primal-dual algorithms, or ADMM. However, such methods can require a very large number of iterations to converge to a suitable solution. In this paper, we accelerate such iterative algorithms by unfolding proximal gradient descent solvers in order to learn their parameters for 1D TV-regularized problems. While this could be done using the synthesis formulation, we demonstrate that it leads to worse performance. The main difficulty in applying such methods to the analysis formulation lies in computing the derivatives through the proximal operator. As our main contribution, we develop and characterize two approaches to do so, describe their benefits and limitations, and discuss the regime in which they can actually improve over iterative procedures. We validate these findings with experiments on synthetic and real data.
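To make the synthesis formulation concrete, the sketch below (not the paper's implementation) solves a 1D TV-regularized denoising problem with plain proximal gradient descent (ISTA) after the change of variable u = Lz, where L is the cumulative-sum operator, so the TV penalty becomes an ℓ1 penalty on z[1:]. The identity forward operator, function names, and step-size choice are illustrative assumptions.

```python
# Minimal sketch, assuming a denoising setting: minimise
#   0.5 * ||u - x||^2 + lmbd * ||D u||_1
# via the synthesis reformulation u = L z, where L is the lower-triangular
# all-ones (cumulative-sum) matrix and D is the first-order difference operator.
import numpy as np


def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1 (element-wise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)


def tv_denoise_synthesis(x, lmbd, n_iter=500):
    """ISTA on the synthesis formulation of 1D TV denoising."""
    n = x.shape[0]
    L = np.tril(np.ones((n, n)))             # synthesis operator: z -> cumsum(z)
    step = 1.0 / np.linalg.norm(L, 2) ** 2   # 1 / Lipschitz constant of the gradient
    z = np.zeros(n)
    for _ in range(n_iter):
        grad = L.T @ (L @ z - x)             # gradient of the data-fit term
        z = z - step * grad
        z[1:] = soft_threshold(z[1:], step * lmbd)  # z[0] (offset) is not penalised
    return L @ z                             # back to the signal domain


# Usage: recover a piecewise-constant signal from noisy observations.
rng = np.random.default_rng(0)
u_true = np.repeat([0.0, 2.0, -1.0, 1.0], 25)
x = u_true + 0.3 * rng.standard_normal(u_true.shape)
u_hat = tv_denoise_synthesis(x, lmbd=1.0)
```

The paper's contribution is to unroll such iterations and learn their parameters, in particular for the analysis formulation, which requires differentiating through the TV proximal operator; the block above only illustrates the classical iterative baseline on the synthesis side.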
Author Information
Hamza Cherkaoui (CEA)
Jeremias Sulam (Johns Hopkins University)
Thomas Moreau (Inria)
More from the Same Authors
- 2021 Spotlight: A Geometric Analysis of Neural Collapse with Unconstrained Features (Zhihui Zhu · Tianyu Ding · Jinxin Zhou · Xiao Li · Chong You · Jeremias Sulam · Qing Qu)
- 2022: DeepSTI: Towards Tensor Reconstruction using Fewer Orientations in Susceptibility Tensor Imaging (Zhenghan Fang · Kuo-Wei Lai · Peter van Zijl · Xu Li · Jeremias Sulam)
- 2022 Poster: Benchopt: Reproducible, efficient and collaborative optimization benchmarks (Thomas Moreau · Mathurin Massias · Alexandre Gramfort · Pierre Ablin · Pierre-Antoine Bannier · Benjamin Charlier · Mathieu Dagréou · Tom Dupre la Tour · Ghislain DURIF · Cassio F. Dantas · Quentin Klopfenstein · Johan Larsson · En Lai · Tanguy Lefort · Benoît Malézieux · Badr MOUFAD · Binh T. Nguyen · Alain Rakotomamonjy · Zaccharie Ramzi · Joseph Salmon · Samuel Vaiter)
- 2022 Poster: Deep invariant networks with differentiable augmentation layers (Cédric ROMMEL · Thomas Moreau · Alexandre Gramfort)
- 2022 Poster: A framework for bilevel optimization that enables stochastic and global variance reduction algorithms (Mathieu Dagréou · Pierre Ablin · Samuel Vaiter · Thomas Moreau)
- 2022 Poster: Recovery and Generalization in Over-Realized Dictionary Learning (Jeremias Sulam · Chong You · Zhihui Zhu)
- 2021 Poster: A Geometric Analysis of Neural Collapse with Unconstrained Features (Zhihui Zhu · Tianyu Ding · Jinxin Zhou · Xiao Li · Chong You · Jeremias Sulam · Qing Qu)
- 2020 Poster: Conformal Symplectic and Relativistic Optimization (Guilherme Franca · Jeremias Sulam · Daniel Robinson · Rene Vidal)
- 2020 Spotlight: Conformal Symplectic and Relativistic Optimization (Guilherme Franca · Jeremias Sulam · Daniel Robinson · Rene Vidal)
- 2020 Poster: Adversarial Robustness of Supervised Sparse Coding (Jeremias Sulam · Ramchandran Muthukumar · Raman Arora)
- 2020 Poster: NeuMiss networks: differentiable programming for supervised learning with missing values (Marine Le Morvan · Julie Josse · Thomas Moreau · Erwan Scornet · Gael Varoquaux)
- 2020 Oral: NeuMiss networks: differentiable programming for supervised learning with missing values (Marine Le Morvan · Julie Josse · Thomas Moreau · Erwan Scornet · Gael Varoquaux)
- 2019 Poster: Learning step sizes for unfolded sparse coding (Pierre Ablin · Thomas Moreau · Mathurin Massias · Alexandre Gramfort)
- 2018 Poster: Multivariate Convolutional Sparse Coding for Electromagnetic Brain Signals (Tom Dupré la Tour · Thomas Moreau · Mainak Jas · Alexandre Gramfort)