
Lower Bounds on Randomly Preconditioned Lasso via Robust Sparse Designs
Jonathan Kelner · Frederic Koehler · Raghu Meka · Dhruv Rohatgi

Thu Dec 01 09:00 AM -- 11:00 AM (PST) @ Hall J #616
Sparse linear regression with ill-conditioned Gaussian random covariates is widely believed to exhibit a statistical/computational gap, but there is surprisingly little formal evidence for this belief. Recent work has shown that, for certain covariance matrices, the broad class of Preconditioned Lasso programs provably cannot succeed on polylogarithmically sparse signals with a sublinear number of samples. However, this lower bound holds only against deterministic preconditioners, and in many contexts randomization is crucial to the success of preconditioners. We prove a stronger lower bound that rules out randomized preconditioners as well. For an appropriate covariance matrix, we construct a single signal distribution on which any invertibly-preconditioned Lasso program fails with high probability, unless it receives a linear number of samples. At the heart of our lower bound is a new robustness result in compressed sensing: we study recovery of a sparse signal when a few measurements can be adversarially erased. To our knowledge, this natural question had not previously been studied for sparse measurement matrices. We show, surprisingly, that standard sparse Bernoulli measurements are almost-optimally robust to adversarial erasures: if $b$ measurements are erased, then all but $O(b)$ of the coordinates of the signal remain identifiable.
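To make the object of the lower bound concrete: a (invertibly-)preconditioned Lasso program picks an invertible matrix $S$, solves the Lasso in the reparametrized coordinates $w = S^{-1}\theta$, and returns $S\hat{w}$. The following is a minimal numpy sketch of that template on a toy noiseless instance, using ISTA (proximal gradient) as the Lasso solver and a randomly chosen diagonal preconditioner; the specific matrices, dimensions, and regularization level are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def lasso_ista(X, y, lam, iters=2000):
    """Solve min_w (1/2)||y - Xw||^2 + lam*||w||_1 via ISTA."""
    L = np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the smooth part
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ w - y)
        z = w - grad / L
        # soft-thresholding (proximal step for the l1 penalty)
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return w

rng = np.random.default_rng(0)
n, d, k = 60, 100, 3

# Toy instance: well-conditioned Gaussian design, k-sparse signal, no noise.
X = rng.standard_normal((n, d))
theta = np.zeros(d)
theta[:k] = 1.0
y = X @ theta

# Preconditioned Lasso template: choose invertible S, run Lasso on XS,
# map back via S. Here S is a random diagonal matrix (illustrative choice).
S = np.diag(rng.uniform(0.5, 2.0, size=d))
w_hat = lasso_ista(X @ S, y, lam=0.5)
theta_hat = S @ w_hat
```

On easy instances like this one, the returned `theta_hat` recovers the support of `theta`; the paper's result is that for suitably chosen ill-conditioned covariances, no choice of $S$ (even a random one) makes this template succeed without linearly many samples.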

Author Information

Jonathan Kelner (MIT)
Frederic Koehler (MIT)
Raghu Meka (UCLA)
Dhruv Rohatgi (MIT)
