Learning Some Popular Gaussian Graphical Models without Condition Number Bounds
Jonathan Kelner, Frederic Koehler, Raghu Meka, Ankur Moitra
Spotlight presentation: Orals & Spotlights Track 35: Neuroscience/Probabilistic
on 2020-12-10, 19:00–19:10 (UTC-08:00)
Abstract: Gaussian Graphical Models (GGMs) have wide-ranging applications in machine learning and the natural and social sciences. In most of the settings in which they are applied, the number of observed samples is much smaller than the dimension, and the underlying precision matrix is assumed to be sparse. While there are a variety of algorithms (e.g., Graphical Lasso, CLIME) that provably recover the graph structure with a logarithmic number of samples, to do so they require various assumptions on the well-conditioning of the precision matrix that are not information-theoretically necessary. Here we give the first fixed polynomial-time algorithms for learning attractive GGMs and walk-summable GGMs with a logarithmic number of samples and without any such assumptions. In particular, our algorithms can tolerate strong dependencies among the variables. Our result for structure recovery in walk-summable GGMs is derived from a more general result for efficient sparse linear regression in walk-summable models without any norm dependencies. We complement our results with experiments showing that many existing algorithms fail even in some simple settings where there are long dependency chains. Our algorithms do not.
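To make the setting concrete, the sketch below (not the authors' code) builds a simple long-dependency-chain GGM of the kind the abstract alludes to: its precision matrix is sparse and attractive (non-positive off-diagonal entries), yet its condition number grows with the chain length, which is exactly the regime where condition-number-dependent guarantees break down. It then runs scikit-learn's GraphicalLasso on samples from this model to probe support recovery; the regularization strength and thresholds are illustrative choices, not values from the paper.

```python
# Illustrative sketch, assuming numpy and scikit-learn are available.
# Not the authors' algorithm; it only demonstrates the problem setup.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
d, n = 50, 1000  # dimension and sample count (illustrative sizes)

# Tridiagonal precision matrix of a chain X_1 - X_2 - ... - X_d.
# Off-diagonal entries are negative, so the model is attractive;
# taking a close to 1/2 makes the matrix nearly singular, so the
# condition number blows up even though the graph is very sparse.
a = 0.49
Theta = np.eye(d) - a * (np.eye(d, k=1) + np.eye(d, k=-1))
print("condition number of precision matrix:", np.linalg.cond(Theta))

# Draw samples from the corresponding Gaussian.
Sigma = np.linalg.inv(Theta)
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)

# Graphical Lasso with an illustrative penalty; on long chains with
# strong dependencies, support recovery tends to degrade.
model = GraphicalLasso(alpha=0.05, max_iter=500).fit(X)
est_support = np.abs(model.precision_) > 1e-3   # illustrative threshold
true_support = np.abs(Theta) > 1e-12
print("support disagreements:", int((est_support != true_support).sum()))
```

Varying the chain length d or pushing a closer to 1/2 makes the conditioning worse while leaving the sparsity pattern unchanged, which is the sense in which condition number bounds are not information-theoretically necessary for structure recovery.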