Gaussian Graphical Models (GGMs) have wide-ranging applications in machine learning and the natural and social sciences. In most of the settings in which they are applied, the number of observed samples is much smaller than the dimension, and the models are assumed to be sparse. While there are a variety of algorithms (e.g., Graphical Lasso, CLIME) that provably recover the graph structure with a logarithmic number of samples, to do so they require various assumptions on the well-conditioning of the precision matrix that are not information-theoretically necessary.
Here we give the first fixed polynomial-time algorithms for learning attractive GGMs and walk-summable GGMs with a logarithmic number of samples without any such assumptions. In particular, our algorithms can tolerate strong dependencies among the variables. Our result for structure recovery in walk-summable GGMs is derived from a more general result for efficient sparse linear regression in walk-summable models without any norm dependencies. We complement our results with experiments showing that many existing algorithms fail even in simple settings with long dependency chains, whereas ours do not.
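As a rough illustration of the experimental setting described above (a sketch, not the paper's algorithm or code), the snippet below builds a chain-structured, attractive GGM whose precision matrix becomes poorly conditioned as the coupling approaches 1/2, then runs scikit-learn's off-the-shelf Graphical Lasso on samples from it. The dimension d, sample size n, coupling a, and regularization alpha are all illustrative choices, not values from the paper.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

d, n = 50, 200   # dimension and sample size (illustrative choices)
a = 0.49         # coupling near 1/2 makes the chain nearly singular

# Tridiagonal precision matrix of a path graph; the non-positive
# off-diagonal entries make this an attractive GGM whose dependency
# structure is one long chain.
Theta = np.eye(d) - a * (np.eye(d, k=1) + np.eye(d, k=-1))
Sigma = np.linalg.inv(Theta)
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)

# Off-the-shelf Graphical Lasso; its recovery guarantees depend on the
# conditioning of Theta, so on long chains with strong dependencies its
# support recovery can degrade, which is the kind of failure mode the
# experiments in the abstract allude to.
est = GraphicalLasso(alpha=0.05, max_iter=500).fit(X)

# Compare the estimated edge set (thresholded off-diagonal support of
# the recovered precision matrix) against the true chain.
true_edges = set(zip(*np.nonzero(np.triu(Theta, k=1))))
found = set(zip(*np.nonzero(np.triu(np.abs(est.precision_) > 1e-3, k=1))))
print(f"recovered {len(true_edges & found)}/{len(true_edges)} true edges, "
      f"{len(found - true_edges)} spurious")
```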
Author Information
Jonathan Kelner (MIT)
Frederic Koehler (MIT)
Raghu Meka (UCLA)
Ankur Moitra (MIT)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Spotlight: Learning Some Popular Gaussian Graphical Models without Condition Number Bounds
  Fri. Dec 11th, 03:00 -- 03:10 AM, Orals & Spotlights: Neuroscience/Probabilistic
More from the Same Authors
- 2022: Semi-Random Sparse Recovery in Nearly-Linear Time
  Jonathan Kelner · Jerry Li · Allen Liu · Aaron Sidford · Kevin Tian
- 2023 Poster: Provable benefits of score matching
  Chirag Pabbaraju · Dhruv Rohatgi · Anish Prasad Sevekari · Holden Lee · Ankur Moitra · Andrej Risteski
- 2023 Poster: Feature Adaptation for Sparse Linear Regression
  Jonathan Kelner · Frederic Koehler · Raghu Meka · Dhruv Rohatgi
- 2023 Poster: User-Level Differential Privacy With Few Examples Per User
  Badih Ghazi · Pritish Kamath · Ravi Kumar · Pasin Manurangsi · Raghu Meka · Chiyuan Zhang
- 2023 Oral: User-Level Differential Privacy With Few Examples Per User
  Badih Ghazi · Pritish Kamath · Ravi Kumar · Pasin Manurangsi · Raghu Meka · Chiyuan Zhang
- 2022 Poster: Polynomial time guarantees for the Burer-Monteiro method
  Diego Cifuentes · Ankur Moitra
- 2022 Poster: Lower Bounds on Randomly Preconditioned Lasso via Robust Sparse Designs
  Jonathan Kelner · Frederic Koehler · Raghu Meka · Dhruv Rohatgi
- 2022 Poster: Sketching based Representations for Robust Image Classification with Provable Guarantees
  Nishanth Dikkala · Sankeerth Rao Karingula · Raghu Meka · Jelani Nelson · Rina Panigrahy · Xin Wang
- 2022 Poster: Learning in Observable POMDPs, without Computationally Intractable Oracles
  Noah Golowich · Ankur Moitra · Dhruv Rohatgi
- 2022 Poster: Robust Model Selection and Nearly-Proper Learning for GMMs
  Allen Liu · Jerry Li · Ankur Moitra
- 2022 Poster: Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks
  Sitan Chen · Aravind Gollakota · Adam Klivans · Raghu Meka
- 2021 Poster: Efficiently Learning One Hidden Layer ReLU Networks From Queries
  Sitan Chen · Adam Klivans · Raghu Meka
- 2021 Poster: A No-go Theorem for Robust Acceleration in the Hyperbolic Plane
  Linus Hamilton · Ankur Moitra
- 2020 Poster: From Boltzmann Machines to Neural Networks and Back Again
  Surbhi Goel · Adam Klivans · Frederic Koehler
- 2020 Poster: Tensor Completion Made Practical
  Allen Liu · Ankur Moitra
- 2020 Poster: Classification Under Misspecification: Halfspaces, Generalized Linear Models, and Evolvability
  Sitan Chen · Frederic Koehler · Ankur Moitra · Morris Yau
- 2020 Poster: Learning Structured Distributions From Untrusted Batches: Faster and Simpler
  Sitan Chen · Jerry Li · Ankur Moitra
- 2020 Spotlight: Classification Under Misspecification: Halfspaces, Generalized Linear Models, and Evolvability
  Sitan Chen · Frederic Koehler · Ankur Moitra · Morris Yau
- 2019 Poster: Fast Convergence of Belief Propagation to Global Optima: Beyond Correlation Decay
  Frederic Koehler
- 2019 Spotlight: Fast Convergence of Belief Propagation to Global Optima: Beyond Correlation Decay
  Frederic Koehler
- 2017 Poster: Information Theoretic Properties of Markov Random Fields, and their Algorithmic Applications
  Linus Hamilton · Frederic Koehler · Ankur Moitra