Poster
From Boltzmann Machines to Neural Networks and Back Again
Surbhi Goel · Adam Klivans · Frederic Koehler
Graphical models are powerful tools for modeling high-dimensional data, but learning graphical models in the presence of latent variables is well-known to be difficult. In this work we give new results for learning Restricted Boltzmann Machines (RBMs), probably the most well-studied class of latent variable models. Our results are based on new connections to learning two-layer neural networks under $\ell_{\infty}$-bounded inputs; for both problems, we give nearly optimal results under the conjectured hardness of sparse parity with noise. Using the connection between RBMs and feedforward networks, we also initiate the theoretical study of supervised RBMs (Hinton, 2012), a version of neural-network learning that couples distributional assumptions induced from the underlying graphical model with the architecture of the unknown function class. We then give an algorithm for learning a natural class of supervised RBMs with better runtime than what is possible for its related class of networks without distributional assumptions.
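To make the RBM-to-feedforward connection in the abstract concrete: marginalizing out the hidden layer of a binary RBM shows that the conditional mean of one visible unit given the rest is a two-layer network with softplus activations. Below is a minimal NumPy sketch of this standard identity; the function name rbm_conditional_mean, the parameter names W, b, c, and the toy sizes are illustrative choices of ours, not code from the paper.

```python
import numpy as np

def softplus(z):
    # numerically stable log(1 + exp(z))
    return np.logaddexp(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rbm_conditional_mean(W, b, c, x, i):
    """Exact E[x_i | x_{-i}] for a binary {0,1} RBM with energy
    -(b.x + c.h + h^T W x).  Summing out the hidden layer shows this
    conditional is a two-layer feedforward network (softplus hidden
    units, sigmoid output) applied to the remaining visible units."""
    x0 = x.copy()
    x0[i] = 0.0                       # clamp x_i to 0
    pre = c + W @ x0                  # hidden pre-activations excluding x_i
    # log-odds of x_i = 1 vs. 0 given the other visible units
    log_odds = b[i] + np.sum(softplus(pre + W[:, i]) - softplus(pre))
    return sigmoid(log_odds)

# toy usage: 3 hidden units, 5 visible units, random parameters
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))
b = rng.normal(size=5)
c = rng.normal(size=3)
x = rng.integers(0, 2, size=5).astype(float)
print(rbm_conditional_mean(W, b, c, x, i=2))
```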
Author Information
Surbhi Goel (Microsoft Research NYC)
Adam Klivans (UT Austin)
Frederic Koehler (MIT)
More from the Same Authors
- 2022: HotProtein: A Novel Framework for Protein Thermostability Prediction and Editing »
  Tianlong Chen · Chengyue Gong · Daniel Diaz · Xuxi Chen · Jordan Wells · Qiang Liu · Zhangyang Wang · Andrew Ellington · Alex Dimakis · Adam Klivans
- 2023 Workshop: Mathematics of Modern Machine Learning (M3L) »
  Aditi Raghunathan · Alex Damian · Bingbin Liu · Christina Baek · Kaifeng Lyu · Surbhi Goel · Tengyu Ma · Zhiyuan Li
- 2022 Poster: Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit »
  Boaz Barak · Benjamin Edelman · Surbhi Goel · Sham Kakade · Eran Malach · Cyril Zhang
- 2022 Poster: Recurrent Convolutional Neural Networks Learn Succinct Learning Algorithms »
  Surbhi Goel · Sham Kakade · Adam Kalai · Cyril Zhang
- 2022 Poster: Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks »
  Sitan Chen · Aravind Gollakota · Adam Klivans · Raghu Meka
- 2021 Poster: Efficiently Learning One Hidden Layer ReLU Networks From Queries »
  Sitan Chen · Adam Klivans · Raghu Meka
- 2021 Poster: Gone Fishing: Neural Active Learning with Fisher Embeddings »
  Jordan Ash · Surbhi Goel · Akshay Krishnamurthy · Sham Kakade
- 2020 Poster: Learning Some Popular Gaussian Graphical Models without Condition Number Bounds »
  Jonathan Kelner · Frederic Koehler · Raghu Meka · Ankur Moitra
- 2020 Spotlight: Learning Some Popular Gaussian Graphical Models without Condition Number Bounds »
  Jonathan Kelner · Frederic Koehler · Raghu Meka · Ankur Moitra
- 2020 Poster: Statistical-Query Lower Bounds via Functional Gradients »
  Surbhi Goel · Aravind Gollakota · Adam Klivans
- 2020 Poster: Classification Under Misspecification: Halfspaces, Generalized Linear Models, and Evolvability »
  Sitan Chen · Frederic Koehler · Ankur Moitra · Morris Yau
- 2020 Spotlight: Classification Under Misspecification: Halfspaces, Generalized Linear Models, and Evolvability »
  Sitan Chen · Frederic Koehler · Ankur Moitra · Morris Yau
- 2019 Poster: Time/Accuracy Tradeoffs for Learning a ReLU with respect to Gaussian Marginals »
  Surbhi Goel · Sushrut Karmalkar · Adam Klivans
- 2019 Spotlight: Time/Accuracy Tradeoffs for Learning a ReLU with respect to Gaussian Marginals »
  Surbhi Goel · Sushrut Karmalkar · Adam Klivans
- 2019 Poster: Fast Convergence of Belief Propagation to Global Optima: Beyond Correlation Decay »
  Frederic Koehler
- 2019 Spotlight: Fast Convergence of Belief Propagation to Global Optima: Beyond Correlation Decay »
  Frederic Koehler
- 2019 Poster: List-decodable Linear Regression »
  Sushrut Karmalkar · Adam Klivans · Pravesh Kothari
- 2019 Spotlight: List-decodable Linear Regression »
  Sushrut Karmalkar · Adam Klivans · Pravesh Kothari
- 2017 Poster: Information Theoretic Properties of Markov Random Fields, and their Algorithmic Applications »
  Linus Hamilton · Frederic Koehler · Ankur Moitra
- 2017 Poster: Eigenvalue Decay Implies Polynomial-Time Learnability for Neural Networks »
  Surbhi Goel · Adam Klivans
- 2014 Poster: Sparse Polynomial Learning and Graph Sketching »
  Murat Kocaoglu · Karthikeyan Shanmugam · Alex Dimakis · Adam Klivans
- 2014 Oral: Sparse Polynomial Learning and Graph Sketching »
  Murat Kocaoglu · Karthikeyan Shanmugam · Alex Dimakis · Adam Klivans