Poster
Implicit Regularization for Optimal Sparse Recovery
Tomas Vaskevicius · Varun Kanade · Patrick Rebeschini
Wed Dec 11 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #235
We investigate implicit regularization schemes for gradient descent methods applied to unpenalized least squares regression to solve the problem of reconstructing a sparse signal from an underdetermined system of linear measurements under the restricted isometry assumption. For a given parametrization yielding a non-convex optimization problem, we show that prescribed choices of initialization, step size and stopping time yield a statistically and computationally optimal algorithm that achieves the minimax rate with the same cost required to read the data up to poly-logarithmic factors. Beyond minimax optimality, we show that our algorithm adapts to instance difficulty and yields a dimension-independent rate when the signal-to-noise ratio is high enough. Key to the computational efficiency of our method is an increasing step size scheme that adapts to refined estimates of the true solution. We validate our findings with numerical experiments and compare our algorithm against explicit $\ell_{1}$ penalization. Going from hard instances to easy ones, our algorithm is seen to undergo a phase transition, eventually matching least squares with an oracle knowledge of the true support.
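As a rough illustration of the approach described in the abstract, the sketch below runs plain gradient descent on the unpenalized least squares objective under the Hadamard-type parametrization w = u ⊙ u − v ⊙ v, with a small initialization and a fixed iteration budget standing in for the paper's prescribed schedule. The constant step size is a simplification of the increasing step size scheme, and all parameter values are illustrative, not the paper's prescriptions:

```python
import numpy as np

def sparse_recovery_gd(X, y, alpha=1e-6, eta=0.1, n_iters=1000):
    """Gradient descent on the unpenalized loss (1/2n)||Xw - y||^2
    under the parametrization w = u*u - v*v (elementwise).
    The small initialization scale alpha, the step size eta, and the
    iteration budget n_iters (early stopping) supply the implicit
    regularization; no explicit penalty term appears anywhere."""
    n, d = X.shape
    u = np.full(d, alpha)
    v = np.full(d, alpha)
    for _ in range(n_iters):
        w = u * u - v * v
        g = X.T @ (X @ w - y) / n  # gradient of the loss w.r.t. w
        u -= eta * 2 * g * u       # chain rule: dw/du = 2u
        v += eta * 2 * g * v       # chain rule: dw/dv = -2v
    return u * u - v * v
```

On a noiseless underdetermined Gaussian design (where restricted isometry holds with high probability), the small initialization biases the trajectory toward sparse solutions, so the iterates approximately recover a planted sparse signal without any explicit $\ell_{1}$ penalty.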
Author Information
Tomas Vaskevicius (University of Oxford)
Varun Kanade (University of Oxford)
Patrick Rebeschini (University of Oxford)
More from the Same Authors
- 2023 Poster: A Novel Framework for Policy Mirror Descent with General Parametrization and Linear Convergence
  Carlo Alfano · Rui Yuan · Patrick Rebeschini
- 2023 Poster: Optimal Convergence Rate for Exact Policy Mirror Descent in Discounted Markov Decision Processes
  Emmeran Johnson · Ciara Pike-Burke · Patrick Rebeschini
- 2023 Poster: Computational Guarantees for Doubly Entropic Wasserstein Barycenters
  Lénaïc Chizat · Tomas Vaskevicius
- 2021 Poster: Implicit Regularization in Matrix Sensing via Mirror Descent
  Fan Wu · Patrick Rebeschini
- 2021 Poster: Distributed Machine Learning with Sparse Heterogeneous Data
  Dominic Richards · Sahand Negahban · Patrick Rebeschini
- 2021 Poster: Time-independent Generalization Bounds for SGLD in Non-convex Settings
  Tyler Farghly · Patrick Rebeschini
- 2021 Poster: On Optimal Interpolation in Linear Regression
  Eduard Oravkin · Patrick Rebeschini
- 2020 Poster: A Continuous-Time Mirror Descent Approach to Sparse Phase Retrieval
  Fan Wu · Patrick Rebeschini
- 2020 Spotlight: A Continuous-Time Mirror Descent Approach to Sparse Phase Retrieval
  Fan Wu · Patrick Rebeschini
- 2020 Poster: The Statistical Complexity of Early-Stopped Mirror Descent
  Tomas Vaskevicius · Varun Kanade · Patrick Rebeschini
- 2020 Spotlight: The Statistical Complexity of Early-Stopped Mirror Descent
  Tomas Vaskevicius · Varun Kanade · Patrick Rebeschini
- 2020 Poster: Adaptive Reduced Rank Regression
  Qiong Wu · Felix MF Wong · Yanhua Li · Zhenming Liu · Varun Kanade
- 2019 Poster: Optimal Statistical Rates for Decentralised Non-Parametric Regression with Linear Speed-Up
  Dominic Richards · Patrick Rebeschini
- 2019 Poster: Decentralized Cooperative Stochastic Bandits
  David Martínez-Rubio · Varun Kanade · Patrick Rebeschini
- 2019 Poster: On the Hardness of Robust Classification
  Pascale Gourdeau · Varun Kanade · Marta Kwiatkowska · James Worrell
- 2019 Spotlight: On the Hardness of Robust Classification
  Pascale Gourdeau · Varun Kanade · Marta Kwiatkowska · James Worrell
- 2017 Poster: Accelerated consensus via Min-Sum Splitting
  Patrick Rebeschini · Sekhar C Tatikonda