Fitting high-dimensional statistical models often requires the use of non-linear parameter estimation procedures. As a consequence, it is generally impossible to obtain an exact characterization of the probability distribution of the parameter estimates. This in turn implies that it is extremely challenging to quantify the 'uncertainty' associated with a certain parameter estimate. Concretely, no commonly accepted procedure exists for computing classical measures of uncertainty and statistical significance such as confidence intervals or p-values. We consider here a broad class of regression problems, and propose an efficient algorithm for constructing confidence intervals and p-values. The resulting confidence intervals have nearly optimal size. When testing for the null hypothesis that a certain parameter is vanishing, our method has nearly optimal power. Our approach is based on constructing a 'de-biased' version of regularized M-estimators. The new construction improves over recent work in the field in that it does not assume a special structure on the design matrix. Furthermore, the proofs are remarkably simple. We test our method on a diabetes prediction problem.
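A minimal sketch of the de-biasing idea described above, not the paper's exact construction: start from a lasso estimate theta_hat, then correct its shrinkage bias with a one-step update theta_d = theta_hat + (1/n) * M @ X.T @ (y - X @ theta_hat), where M approximates the inverse of the sample covariance. For simplicity this sketch takes M to be the exact inverse, which requires n > p (in that regime the de-biased estimate reduces to ordinary least squares); the paper instead builds M via a convex program so that the construction also applies when p > n. The sample sizes, regularization level, and variable names below are illustrative choices, and the noise level is assumed known.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=500):
    """Lasso via coordinate descent: min (1/2n)||y - X theta||^2 + lam*||theta||_1."""
    n, p = X.shape
    theta = np.zeros(p)
    col_norm = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with coordinate j removed, then soft-threshold.
            r_j = y - X @ theta + X[:, j] * theta[j]
            rho = X[:, j] @ r_j / n
            theta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_norm[j]
    return theta

rng = np.random.default_rng(0)
n, p, sigma = 200, 10, 0.5
X = rng.standard_normal((n, p))
theta_true = np.zeros(p)
theta_true[:3] = [2.0, -1.0, 1.5]
y = X @ theta_true + sigma * rng.standard_normal(n)

theta_hat = lasso_cd(X, y, lam=0.1)  # biased toward zero by the L1 penalty

# De-biasing step: theta_d = theta_hat + (1/n) * M @ X.T @ (y - X @ theta_hat)
Sigma_hat = X.T @ X / n
M = np.linalg.inv(Sigma_hat)  # simple stand-in for the paper's M; needs n > p
theta_d = theta_hat + M @ X.T @ (y - X @ theta_hat) / n

# The de-biased coordinates are approximately Gaussian, giving 95% intervals.
se = sigma * np.sqrt(np.diag(M @ Sigma_hat @ M.T) / n)
ci_lo, ci_hi = theta_d - 1.96 * se, theta_d + 1.96 * se
```

With M equal to the exact inverse, the correction cancels the lasso shrinkage entirely and theta_d coincides with the least-squares solution; the interest of the construction is that an approximate M retains the coordinate-wise Gaussian limit in the high-dimensional regime where least squares is unavailable.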
Author Information
Adel Javanmard (Stanford University)
Andrea Montanari (Stanford)
More from the Same Authors
- 2021 Poster: Streaming Belief Propagation for Community Detection
  Yuchen Wu · Jakab Tardos · Mohammadhossein Bateni · André Linhares · Filipe Miguel Goncalves de Almeida · Andrea Montanari · Ashkan Norouzi-Fard
- 2020 Poster: When Do Neural Networks Outperform Kernel Methods?
  Behrooz Ghorbani · Song Mei · Theodor Misiakiewicz · Andrea Montanari
- 2019 Poster: Limitations of Lazy Training of Two-layers Neural Network
  Behrooz Ghorbani · Song Mei · Theodor Misiakiewicz · Andrea Montanari
- 2019 Spotlight: Limitations of Lazy Training of Two-layers Neural Network
  Behrooz Ghorbani · Song Mei · Theodor Misiakiewicz · Andrea Montanari
- 2018 Poster: Contextual Stochastic Block Models
  Yash Deshpande · Subhabrata Sen · Andrea Montanari · Elchanan Mossel
- 2018 Spotlight: Contextual Stochastic Block Models
  Yash Deshpande · Subhabrata Sen · Andrea Montanari · Elchanan Mossel
- 2017 Poster: Inference in Graphical Models via Semidefinite Programming Hierarchies
  Murat Erdogdu · Yash Deshpande · Andrea Montanari
- 2015: Information-theoretic bounds on learning network dynamics
  Andrea Montanari
- 2015 Poster: Convergence rates of sub-sampled Newton methods
  Murat Erdogdu · Andrea Montanari
- 2015 Poster: On the Limitation of Spectral Methods: From the Gaussian Hidden Clique Problem to Rank-One Perturbations of Gaussian Tensors
  Andrea Montanari · Daniel Reichman · Ofer Zeitouni
- 2014 Poster: A statistical model for tensor PCA
  Emile Richard · Andrea Montanari
- 2014 Poster: Cone-Constrained Principal Component Analysis
  Yash Deshpande · Andrea Montanari · Emile Richard
- 2014 Poster: Sparse PCA via Covariance Thresholding
  Yash Deshpande · Andrea Montanari
- 2013 Poster: Estimating LASSO Risk and Noise Level
  Mohsen Bayati · Murat Erdogdu · Andrea Montanari
- 2013 Poster: Model Selection for High-Dimensional Regression under the Generalized Irrepresentability Condition
  Adel Javanmard · Andrea Montanari
- 2012 Poster: Efficient Reinforcement Learning for High Dimensional Linear Quadratic Systems
  Morteza Ibrahimi · Adel Javanmard · Benjamin Van Roy
- 2010 Poster: Learning Networks of Stochastic Differential Equations
  José Bento · Morteza Ibrahimi · Andrea Montanari
- 2010 Poster: The LASSO risk: asymptotic results and real world examples
  Mohsen Bayati · José Bento · Andrea Montanari
- 2009 Poster: Matrix Completion from Noisy Entries
  Raghunandan Keshavan · Andrea Montanari · Sewoong Oh
- 2009 Poster: Which graphical models are difficult to learn?
  Andrea Montanari · José Bento