Generative neural networks have empirically proven very promising at providing effective structural priors for compressed sensing, since they can be trained to span low-dimensional data manifolds in high-dimensional signal spaces. Despite the non-convexity of the resulting optimization problem, it has also been shown theoretically that, for neural networks with random Gaussian weights, a signal in the range of the network can be efficiently and approximately recovered from a few noisy measurements. However, a major bottleneck of these theoretical guarantees is a network \emph{expansivity} condition: each layer of the neural network must be larger than the previous one by a logarithmic factor. Our main contribution is to break this strong expansivity assumption, showing that \emph{constant} expansivity suffices for efficient recovery algorithms and is, moreover, information-theoretically necessary. To overcome the theoretical bottleneck in existing approaches, we prove a novel uniform concentration theorem for random functions that might not be Lipschitz but satisfy a relaxed notion which we call ``pseudo-Lipschitzness.'' Using this theorem, we show that a matrix concentration inequality known as the \emph{Weight Distribution Condition (WDC)}, which was previously known to hold only for Gaussian matrices with logarithmic aspect ratio, in fact holds for constant aspect ratios too. Since the WDC is a fundamental matrix concentration inequality at the heart of all existing theoretical guarantees on this problem, our tighter bound immediately yields improvements in all known results in the literature on compressed sensing with deep generative priors, including one-bit recovery, phase retrieval, and more.
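The recovery problem the abstract describes can be illustrated with a minimal sketch: a two-layer ReLU generator with random Gaussian weights and constant (factor-2) per-layer expansion, Gaussian measurements of a signal in the generator's range, and latent-space gradient descent with random restarts. All dimensions, step sizes, and iteration counts below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent dim k, hidden n1, output n2, measurements m.
# Each layer is only a constant factor (2x) wider than the last:
# the "constant expansivity" regime this work analyzes.
k, n1, n2, m = 4, 8, 16, 40

relu = lambda v: np.maximum(v, 0.0)

# Random Gaussian generator weights with variance 1/fan-in.
W1 = rng.normal(0, 1 / np.sqrt(k), (n1, k))
W2 = rng.normal(0, 1 / np.sqrt(n1), (n2, n1))
G = lambda z: relu(W2 @ relu(W1 @ z))

# Ground-truth signal in the range of G; Gaussian measurements
# (noiseless here, for simplicity).
z_star = rng.normal(size=k)
x_star = G(z_star)
A = rng.normal(0, 1 / np.sqrt(m), (m, n2))
y = A @ x_star

def loss_and_grad(z):
    """Squared measurement error and its gradient in latent space."""
    h1 = W1 @ z; a1 = relu(h1)
    h2 = W2 @ a1; x = relu(h2)
    r = A @ x - y
    # Backpropagate through both ReLU layers.
    g = A.T @ r
    g = W2.T @ (g * (h2 > 0))
    g = W1.T @ (g * (h1 > 0))
    return 0.5 * (r @ r), g

# The objective is non-convex, so run plain gradient descent from
# several random initializations and keep the best solution.
best_z, best_loss = None, np.inf
for _ in range(10):
    z = rng.normal(size=k)
    for _ in range(3000):
        _, g = loss_and_grad(z)
        z -= 0.1 * g
    cur, _ = loss_and_grad(z)
    if cur < best_loss:
        best_z, best_loss = z, cur

rel_err = np.linalg.norm(G(best_z) - x_star) / np.linalg.norm(x_star)
print(f"relative reconstruction error: {rel_err:.3f}")
```

On tiny instances like this, at least one restart typically lands in the benign basin around the true latent code; the theoretical guarantees explain why the landscape is favorable despite non-convexity.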
Author Information
Constantinos Daskalakis (MIT)
Dhruv Rohatgi (MIT)
Emmanouil Zampetakis (MIT)
Related Events (a corresponding poster, oral, or spotlight)
-
2020 Spotlight: Constant-Expansion Suffices for Compressed Sensing with Generative Priors »
Tue. Dec 8th 04:10 -- 04:20 PM Room Orals & Spotlights: Dynamical Sys/Density/Sparsity
More from the Same Authors
-
2021 : Estimation of Standard Asymmetric Auction Models »
Yeshwanth Cherapanamjeri · Constantinos Daskalakis · Andrew Ilyas · Emmanouil Zampetakis -
2021 : Near-Optimal No-Regret Learning in General Games »
Constantinos Daskalakis · Maxwell Fishelson · Noah Golowich -
2021 : Robust Algorithms for GMM Estimation: A Finite Sample Viewpoint »
Dhruv Rohatgi -
2023 Poster: Provable benefits of score matching »
Chirag Pabbaraju · Dhruv Rohatgi · Anish Prasad Sevekari · Holden Lee · Ankur Moitra · Andrej Risteski -
2023 Poster: Feature Adaptation for Sparse Linear Regression »
Jonathan Kelner · Frederic Koehler · Raghu Meka · Dhruv Rohatgi -
2023 Poster: Learning Exponential Families from Truncated Samples »
Jane Lee · Andre Wibisono · Emmanouil Zampetakis -
2023 Poster: Martingale Diffusion Models: Mitigating Sampling Drift by Learning to be Consistent »
Giannis Daras · Yuval Dagan · Alex Dimakis · Constantinos Daskalakis -
2022 Poster: Robust Generalized Method of Moments: A Finite Sample Viewpoint »
Dhruv Rohatgi · Vasilis Syrgkanis -
2022 Poster: Lower Bounds on Randomly Preconditioned Lasso via Robust Sparse Designs »
Jonathan Kelner · Frederic Koehler · Raghu Meka · Dhruv Rohatgi -
2022 Poster: Learning in Observable POMDPs, without Computationally Intractable Oracles »
Noah Golowich · Ankur Moitra · Dhruv Rohatgi -
2022 Poster: Learning and Covering Sums of Independent Random Variables with Unbounded Support »
Alkis Kalavasis · Konstantinos Stavropoulos · Emmanouil Zampetakis -
2021 : Spotlight 4: Estimation of Standard Asymmetric Auction Models »
Yeshwanth Cherapanamjeri · Constantinos Daskalakis · Andrew Ilyas · Emmanouil Zampetakis -
2021 : Zoom Q&A for Contributed talks Session 3 »
Dhruv Rohatgi -
2021 : Contributed talks Session 3 »
Dhruv Rohatgi -
2021 Poster: Near-Optimal No-Regret Learning in General Games »
Constantinos Daskalakis · Maxwell Fishelson · Noah Golowich -
2021 Poster: Efficient Truncated Linear Regression with Unknown Noise Variance »
Constantinos Daskalakis · Patroklos Stefanou · Rui Yao · Emmanouil Zampetakis -
2021 Oral: Near-Optimal No-Regret Learning in General Games »
Constantinos Daskalakis · Maxwell Fishelson · Noah Golowich -
2020 Poster: Tight last-iterate convergence rates for no-regret learning in multi-player games »
Noah Golowich · Sarath Pattathil · Constantinos Daskalakis -
2020 Poster: Truncated Linear Regression in High Dimensions »
Constantinos Daskalakis · Dhruv Rohatgi · Emmanouil Zampetakis -
2020 Poster: Optimal Approximation - Smoothness Tradeoffs for Soft-Max Functions »
Alessandro Epasto · Mohammad Mahdian · Vahab Mirrokni · Emmanouil Zampetakis -
2020 Spotlight: Optimal Approximation - Smoothness Tradeoffs for Soft-Max Functions »
Alessandro Epasto · Mohammad Mahdian · Vahab Mirrokni · Emmanouil Zampetakis -
2020 Poster: Independent Policy Gradient Methods for Competitive Reinforcement Learning »
Constantinos Daskalakis · Dylan Foster · Noah Golowich -
2018 : Improving Generative Adversarial Networks using Game Theory and Statistics »
Constantinos Daskalakis -
2018 Poster: Learning and Testing Causal Models with Interventions »
Jayadev Acharya · Arnab Bhattacharyya · Constantinos Daskalakis · Saravanan Kandasamy -
2018 Poster: Smoothed Analysis of Discrete Tensor Decomposition and Assemblies of Neurons »
Nima Anari · Constantinos Daskalakis · Wolfgang Maass · Christos Papadimitriou · Amin Saberi · Santosh Vempala -
2018 Poster: HOGWILD!-Gibbs can be PanAccurate »
Constantinos Daskalakis · Nishanth Dikkala · Siddhartha Jayanti -
2018 Poster: The Limit Points of (Optimistic) Gradient Descent in Min-Max Optimization »
Constantinos Daskalakis · Ioannis Panageas -
2017 Poster: Concentration of Multilinear Functions of the Ising Model with Applications to Network Data »
Constantinos Daskalakis · Nishanth Dikkala · Gautam Kamath -
2015 Poster: Optimal Testing for Properties of Distributions »
Jayadev Acharya · Constantinos Daskalakis · Gautam Kamath -
2015 Spotlight: Optimal Testing for Properties of Distributions »
Jayadev Acharya · Constantinos Daskalakis · Gautam Kamath