Generative neural networks have empirically proven to be promising structural priors for compressed sensing, since they can be trained to span low-dimensional data manifolds in high-dimensional signal spaces. Despite the non-convexity of the resulting optimization problem, it has also been shown theoretically that, for neural networks with random Gaussian weights, a signal in the range of the network can be efficiently, approximately recovered from a few noisy measurements. However, a major bottleneck of these theoretical guarantees is a network \emph{expansivity} condition: that each layer of the neural network must be larger than the previous one by a logarithmic factor. Our main contribution is to break this strong expansivity assumption, showing that \emph{constant} expansivity suffices for efficient recovery algorithms, and that it is also information-theoretically necessary. To overcome the theoretical bottleneck in existing approaches, we prove a novel uniform concentration theorem for random functions that might not be Lipschitz but satisfy a relaxed notion which we call ``pseudo-Lipschitzness.'' Using this theorem we show that a matrix concentration inequality known as the \emph{Weight Distribution Condition (WDC)}, which was previously only known to hold for Gaussian matrices with logarithmic aspect ratio, in fact holds for constant aspect ratios too. Since the WDC is a fundamental matrix concentration inequality at the heart of all existing theoretical guarantees on this problem, our tighter bound immediately yields improvements in all known results in the literature on compressed sensing with deep generative priors, including one-bit recovery, phase retrieval, and more.
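The recovery problem the abstract describes can be illustrated with a minimal sketch: a two-layer ReLU generator with random Gaussian weights and constantly expanding layer widths, a few noisy linear measurements of a signal in its range, and plain subgradient descent on the squared measurement residual. All dimensions, normalizations, step sizes, and the random-restart heuristic below are illustrative choices of ours, not the paper's algorithm or parameters (the paper's analysis uses a more careful descent scheme).

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions with *constant* expansion (factor ~2 per layer), the regime the
# paper addresses; k = latent dimension, m = number of measurements.
k, n1, n2, m = 10, 20, 40, 25

# Random Gaussian weights and measurement matrix (the setting of the guarantees).
W1 = rng.normal(size=(n1, k)) / np.sqrt(n1)
W2 = rng.normal(size=(n2, n1)) / np.sqrt(n2)
A = rng.normal(size=(m, n2)) / np.sqrt(m)

relu = lambda x: np.maximum(x, 0.0)
G = lambda z: relu(W2 @ relu(W1 @ z))  # expansive two-layer ReLU generator

z_star = rng.normal(size=k)                     # true latent code
y = A @ G(z_star) + 0.01 * rng.normal(size=m)   # a few noisy measurements

def loss(z):
    return 0.5 * np.sum((A @ G(z) - y) ** 2)

def grad(z):
    # Backpropagate through the two ReLU layers (subgradient at the kinks).
    h1 = W1 @ z
    h2 = W2 @ relu(h1)
    r = A @ relu(h2) - y
    d2 = (A.T @ r) * (h2 > 0)
    d1 = (W2.T @ d2) * (h1 > 0)
    return W1.T @ d1

# Plain subgradient descent with a few random restarts, keeping the iterate
# with the smallest measurement residual.
best_z, best_f = None, np.inf
for _ in range(5):
    z = rng.normal(size=k)
    for _ in range(3000):
        z = z - 0.01 * grad(z)
    if loss(z) < best_f:
        best_z, best_f = z, loss(z)

err = np.linalg.norm(G(best_z) - G(z_star)) / np.linalg.norm(G(z_star))
print(f"relative reconstruction error: {err:.3f}")
```

Note that the objective is non-convex in z; the theoretical results concern when descent-style methods nevertheless recover G(z_star) from far fewer measurements than the signal dimension.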
Author Information
Constantinos Daskalakis (MIT)
Dhruv Rohatgi (Massachusetts Institute of Technology)
Emmanouil Zampetakis (Massachusetts Institute of Technology)
Related Events (a corresponding poster, oral, or spotlight)

2020 Spotlight: Constant-Expansion Suffices for Compressed Sensing with Generative Priors »
Tue. Dec 8th 04:10 – 04:20 PM Room Orals & Spotlights: Dynamical Sys/Density/Sparsity
More from the Same Authors

2021 : Estimation of Standard Asymmetric Auction Models »
Yeshwanth Cherapanamjeri · Constantinos Daskalakis · Andrew Ilyas · Emmanouil Zampetakis 
2021 : Near-Optimal No-Regret Learning in General Games »
Constantinos Daskalakis · Maxwell Fishelson · Noah Golowich 
2021 : Robust Algorithms for GMM Estimation: A Finite Sample Viewpoint »
Dhruv Rohatgi 
2021 : Spotlight 4: Estimation of Standard Asymmetric Auction Models »
Yeshwanth Cherapanamjeri · Constantinos Daskalakis · Andrew Ilyas · Emmanouil Zampetakis 
2021 : Zoom Q&A for Contributed talks Session 3 »
Dhruv Rohatgi 
2021 : Contributed talks Session 3 »
Dhruv Rohatgi 
2021 Poster: Near-Optimal No-Regret Learning in General Games »
Constantinos Daskalakis · Maxwell Fishelson · Noah Golowich 
2021 Poster: Efficient Truncated Linear Regression with Unknown Noise Variance »
Constantinos Daskalakis · Patroklos Stefanou · Rui Yao · Emmanouil Zampetakis 
2021 Oral: Near-Optimal No-Regret Learning in General Games »
Constantinos Daskalakis · Maxwell Fishelson · Noah Golowich 
2020 Poster: Tight last-iterate convergence rates for no-regret learning in multi-player games »
Noah Golowich · Sarath Pattathil · Constantinos Daskalakis 
2020 Poster: Truncated Linear Regression in High Dimensions »
Constantinos Daskalakis · Dhruv Rohatgi · Emmanouil Zampetakis 
2020 Poster: Optimal Approximation – Smoothness Tradeoffs for Soft-Max Functions »
Alessandro Epasto · Mohammad Mahdian · Vahab Mirrokni · Emmanouil Zampetakis 
2020 Spotlight: Optimal Approximation – Smoothness Tradeoffs for Soft-Max Functions »
Alessandro Epasto · Mohammad Mahdian · Vahab Mirrokni · Emmanouil Zampetakis 
2020 Poster: Independent Policy Gradient Methods for Competitive Reinforcement Learning »
Constantinos Daskalakis · Dylan Foster · Noah Golowich 
2018 : Improving Generative Adversarial Networks using Game Theory and Statistics »
Constantinos Daskalakis 
2018 Poster: Learning and Testing Causal Models with Interventions »
Jayadev Acharya · Arnab Bhattacharyya · Constantinos Daskalakis · Saravanan Kandasamy 
2018 Poster: Smoothed Analysis of Discrete Tensor Decomposition and Assemblies of Neurons »
Nima Anari · Constantinos Daskalakis · Wolfgang Maass · Christos Papadimitriou · Amin Saberi · Santosh Vempala 
2018 Poster: HOGWILD!-Gibbs can be PanAccurate »
Constantinos Daskalakis · Nishanth Dikkala · Siddhartha Jayanti 
2018 Poster: The Limit Points of (Optimistic) Gradient Descent in Min-Max Optimization »
Constantinos Daskalakis · Ioannis Panageas 
2017 Poster: Concentration of Multilinear Functions of the Ising Model with Applications to Network Data »
Constantinos Daskalakis · Nishanth Dikkala · Gautam Kamath 
2015 Poster: Optimal Testing for Properties of Distributions »
Jayadev Acharya · Constantinos Daskalakis · Gautam Kamath 
2015 Spotlight: Optimal Testing for Properties of Distributions »
Jayadev Acharya · Constantinos Daskalakis · Gautam Kamath