We consider the problem of learning a coefficient vector x0 from noisy linear observations y = A x0 + w. In many contexts (ranging from model selection to image processing) it is desirable to construct a sparse estimator. In this case, a popular approach is to solve an ℓ1-penalized least-squares problem known as the LASSO or basis pursuit denoising (BPDN).
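Concretely, the estimator referred to above is the minimizer of the ℓ1-penalized least-squares objective; this is the standard formulation (the regularization parameter λ and the 1/2 scaling are conventional choices, not notation fixed by the abstract):

\hat{x}(\lambda) = \arg\min_{x \in \mathbb{R}^p} \left\{ \tfrac{1}{2} \lVert y - Ax \rVert_2^2 + \lambda \lVert x \rVert_1 \right\}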
For sequences of matrices A of increasing dimensions, with i.i.d. Gaussian entries, we prove that the normalized risk of the LASSO converges to a limit, and we obtain an explicit expression for this limit. Our result is the first rigorous derivation of an explicit formula for the asymptotic risk of the LASSO on random instances. The proof technique is based on the analysis of AMP (approximate message passing), a recently developed efficient algorithm inspired by ideas from graphical models.
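As a rough illustration of the algorithm the proof builds on, below is a minimal sketch of an AMP iteration (soft thresholding plus an Onsager correction term) run on a synthetic Gaussian instance. The fixed threshold theta, the problem sizes, and the helper names are illustrative assumptions rather than the paper's construction, which couples the threshold to the regularization parameter through a state-evolution recursion.

import numpy as np

def soft_threshold(u, theta):
    # Proximal operator of the l1 norm (componentwise soft thresholding).
    return np.sign(u) * np.maximum(np.abs(u) - theta, 0.0)

def amp_lasso(y, A, theta, n_iter=50):
    # Minimal AMP sketch with a fixed threshold theta (illustrative only).
    n, p = A.shape
    x = np.zeros(p)
    z = y.copy()
    for _ in range(n_iter):
        # Pseudo-data: asymptotically behaves like x0 plus i.i.d. Gaussian noise.
        r = x + A.T @ z
        x_new = soft_threshold(r, theta)
        # Onsager correction: number of active coordinates divided by n.
        b = np.count_nonzero(x_new) / n
        z = y - A @ x_new + b * z
        x = x_new
    return x

# Synthetic instance with an i.i.d. Gaussian sensing matrix (sizes are illustrative).
rng = np.random.default_rng(0)
n, p, k = 250, 500, 25
A = rng.normal(size=(n, p)) / np.sqrt(n)
x0 = np.zeros(p)
x0[rng.choice(p, size=k, replace=False)] = rng.normal(size=k)
y = A @ x0 + 0.05 * rng.normal(size=n)
x_hat = amp_lasso(y, A, theta=0.1)
print("per-coordinate MSE:", np.mean((x_hat - x0) ** 2))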
Through simulations on real data matrices (gene expression data and hospital medical records) we observe that these results can be relevant in a broad array of practical applications.
Author Information
Mohsen Bayati (Stanford University)
José Bento (Boston College)
Andrea Montanari (Stanford)
More from the Same Authors
- 2022 Poster: Thompson Sampling Efficiently Learns to Control Diffusion Processes »
  Mohamad Kazem Shirani Faradonbeh · Mohamad Sadegh Shirani Faradonbeh · Mohsen Bayati
- 2021 Poster: Streaming Belief Propagation for Community Detection »
  Yuchen Wu · Jakab Tardos · Mohammadhossein Bateni · André Linhares · Filipe Miguel Goncalves de Almeida · Andrea Montanari · Ashkan Norouzi-Fard
- 2020 Poster: Unreasonable Effectiveness of Greedy Algorithms in Multi-Armed Bandit with Many Arms »
  Mohsen Bayati · Nima Hamidi · Ramesh Johari · Khashayar Khosravi
- 2020 Spotlight: Unreasonable Effectiveness of Greedy Algorithms in Multi-Armed Bandit with Many Arms »
  Mohsen Bayati · Nima Hamidi · Ramesh Johari · Khashayar Khosravi
- 2020 Poster: When Do Neural Networks Outperform Kernel Methods? »
  Behrooz Ghorbani · Song Mei · Theodor Misiakiewicz · Andrea Montanari
- 2019 Poster: Limitations of Lazy Training of Two-layers Neural Network »
  Behrooz Ghorbani · Song Mei · Theodor Misiakiewicz · Andrea Montanari
- 2019 Spotlight: Limitations of Lazy Training of Two-layers Neural Network »
  Behrooz Ghorbani · Song Mei · Theodor Misiakiewicz · Andrea Montanari
- 2019 Poster: Personalizing Many Decisions with High-Dimensional Covariates »
  Nima Hamidi · Mohsen Bayati · Kapil Gupta
- 2018 Poster: Efficient Projection onto the Perfect Phylogeny Model »
  Bei Jia · Surjyendu Ray · Sam Safavi · José Bento
- 2018 Poster: Contextual Stochastic Block Models »
  Yash Deshpande · Subhabrata Sen · Andrea Montanari · Elchanan Mossel
- 2018 Spotlight: Contextual Stochastic Block Models »
  Yash Deshpande · Subhabrata Sen · Andrea Montanari · Elchanan Mossel
- 2017 Poster: Inference in Graphical Models via Semidefinite Programming Hierarchies »
  Murat Erdogdu · Yash Deshpande · Andrea Montanari
- 2016 Poster: Scaled Least Squares Estimator for GLMs in Large-Scale Problems »
  Murat Erdogdu · Lee H Dicker · Mohsen Bayati
- 2015 : Information-theoretic bounds on learning network dynamics »
  Andrea Montanari
- 2015 : Learning Stochastic Differential Equations »
  José Bento
- 2015 Poster: Convergence rates of sub-sampled Newton methods »
  Murat Erdogdu · Andrea Montanari
- 2015 Poster: On the Limitation of Spectral Methods: From the Gaussian Hidden Clique Problem to Rank-One Perturbations of Gaussian Tensors »
  Andrea Montanari · Daniel Reichman · Ofer Zeitouni
- 2014 Poster: A statistical model for tensor PCA »
  Emile Richard · Andrea Montanari
- 2014 Poster: Cone-Constrained Principal Component Analysis »
  Yash Deshpande · Andrea Montanari · Emile Richard
- 2014 Poster: Shape and Illumination from Shading using the Generic Viewpoint Assumption »
  Daniel Zoran · Dilip Krishnan · José Bento · Bill Freeman
- 2014 Poster: Sparse PCA via Covariance Thresholding »
  Yash Deshpande · Andrea Montanari
- 2013 Demonstration: The Three-Weight Algorithm: Enhancing ADMM for Large-Scale Distributed Optimization »
  Nate Derbinsky · José Bento · Jonathan S Yedidia
- 2013 Poster: A message-passing algorithm for multi-agent trajectory planning »
  José Bento · Nate Derbinsky · Javier Alonso-Mora · Jonathan S Yedidia
- 2013 Poster: Estimating LASSO Risk and Noise Level »
  Mohsen Bayati · Murat Erdogdu · Andrea Montanari
- 2013 Poster: Confidence Intervals and Hypothesis Testing for High-Dimensional Statistical Models »
  Adel Javanmard · Andrea Montanari
- 2013 Poster: Model Selection for High-Dimensional Regression under the Generalized Irrepresentability Condition »
  Adel Javanmard · Andrea Montanari
- 2010 Poster: Learning Networks of Stochastic Differential Equations »
  José Bento · Morteza Ibrahimi · Andrea Montanari
- 2009 Poster: Matrix Completion from Noisy Entries »
  Raghunandan Keshavan · Andrea Montanari · Sewoong Oh
- 2009 Poster: Which graphical models are difficult to learn? »
  Andrea Montanari · José Bento