Poster
Global Guarantees for Blind Demodulation with Generative Priors
Paul Hand · Babhru Joshi
Thu Dec 12 05:00 PM -- 07:00 PM (PST) @ East Exhibition Hall B + C #46
We study a deep-learning-inspired formulation of the blind demodulation problem, which is the task of recovering two unknown vectors from their entrywise multiplication. We consider the case where the unknown vectors are in the range of known deep generative models, $\mathcal{G}^{(1)}:\mathbb{R}^n\rightarrow\mathbb{R}^\ell$ and $\mathcal{G}^{(2)}:\mathbb{R}^p\rightarrow\mathbb{R}^\ell$. When the networks corresponding to the generative models are expansive, the weight matrices are random, and the dimension of the unknown vectors satisfies $\ell = \Omega(n^2+p^2)$, up to log factors, we show that the empirical risk objective has a favorable landscape for optimization. That is, the objective function has a descent direction at every point outside of a small neighborhood around four hyperbolic curves. We also characterize the local maximizers of the empirical risk objective and, hence, show that there do not exist any other stationary points outside of these neighborhoods around the four hyperbolic curves and the set of local maximizers. We also implement a gradient descent scheme inspired by the geometry of the landscape of the objective function. In order to converge to a global minimizer, this gradient descent scheme exploits the fact that exactly one of the hyperbolic curves corresponds to the global minimizer, and thus points near this hyperbolic curve have a lower objective value than points close to the other, spurious hyperbolic curves. We show that this gradient descent scheme can effectively remove distortions synthetically introduced to the MNIST dataset.
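The setup above can be sketched in a few lines of NumPy: two small random expansive ReLU generators, an entrywise-product measurement, and plain gradient descent on the empirical risk. This is a minimal illustration under assumed network sizes and step size, not the authors' exact scheme (which additionally compares objective values near the candidate hyperbolic curves to select the one containing the global minimizer).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper):
# latent dims n, p and signal dim ell.
n, p, ell = 4, 4, 64

# Random expansive one-layer ReLU generators G_i(x) = relu(W_i x),
# with i.i.d. Gaussian weights as in the random-weight model.
W1 = rng.standard_normal((ell, n)) / np.sqrt(ell)
W2 = rng.standard_normal((ell, p)) / np.sqrt(ell)

relu = lambda z: np.maximum(z, 0.0)
G1 = lambda h: relu(W1 @ h)
G2 = lambda m: relu(W2 @ m)

# Ground-truth latents and the entrywise-product measurement
# y = G1(h*) ⊙ G2(m*).
h_star = rng.standard_normal(n)
m_star = rng.standard_normal(p)
y = G1(h_star) * G2(m_star)

def loss(h, m):
    """Empirical risk: 0.5 * || G1(h) ⊙ G2(m) - y ||^2."""
    r = G1(h) * G2(m) - y
    return 0.5 * np.sum(r ** 2)

def grads(h, m):
    """Subgradients of the empirical risk; relu'(z) taken as 1[z > 0]."""
    u, v = W1 @ h, W2 @ m
    g1, g2 = relu(u), relu(v)
    r = g1 * g2 - y
    gh = W1.T @ ((r * g2) * (u > 0))
    gm = W2.T @ ((r * g1) * (v > 0))
    return gh, gm

# Plain gradient descent from a random start.
h, m = rng.standard_normal(n), rng.standard_normal(p)
init_loss = loss(h, m)
step = 0.5
for _ in range(2000):
    gh, gm = grads(h, m)
    h -= step * gh
    m -= step * gm

print("initial loss:", init_loss)
print("final loss:  ", loss(h, m))
```

Note that the minimizer is only identifiable up to the scaling ambiguity $(c\,h, m/c)$ for $c > 0$ (ReLU is positively homogeneous), which is exactly why the stationary points organize along hyperbolic curves rather than isolated points.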
Author Information
Paul Hand (Northeastern University)
Babhru Joshi (University of British Columbia)
More from the Same Authors
- 2023 Workshop: Learning-Based Solutions for Inverse Problems
  Shirin Jalali · christopher metzler · Ajil Jalal · Jon Tamir · Reinhard Heckel · Paul Hand · Arian Maleki · Richard Baraniuk
- 2021 Spotlight: PLUGIn: A simple algorithm for inverting generative models with recovery guarantees
  Babhru Joshi · Xiaowei Li · Yaniv Plan · Ozgur Yilmaz
- 2021 Workshop: Workshop on Deep Learning and Inverse Problems
  Reinhard Heckel · Paul Hand · Rebecca Willett · christopher metzler · Mahdi Soltanolkotabi
- 2021 Poster: PLUGIn: A simple algorithm for inverting generative models with recovery guarantees
  Babhru Joshi · Xiaowei Li · Yaniv Plan · Ozgur Yilmaz
- 2021 Poster: Score-based Generative Neural Networks for Large-Scale Optimal Transport
  Grady Daniels · Tyler Maunu · Paul Hand
- 2020: Opening Remarks
  Reinhard Heckel · Paul Hand · Soheil Feizi · Lenka Zdeborová · Richard Baraniuk
- 2020 Workshop: Workshop on Deep Learning and Inverse Problems
  Reinhard Heckel · Paul Hand · Richard Baraniuk · Lenka Zdeborová · Soheil Feizi
- 2020: Newcomer presentation
  Reinhard Heckel · Paul Hand
- 2020 Poster: Nonasymptotic Guarantees for Spiked Matrix Recovery with Generative Priors
  Jorio Cocola · Paul Hand · Vlad Voroninski
- 2019: Opening Remarks
  Reinhard Heckel · Paul Hand · Alex Dimakis · Joan Bruna · Deanna Needell · Richard Baraniuk
- 2019 Workshop: Solving inverse problems with deep networks: New architectures, theoretical foundations, and applications
  Reinhard Heckel · Paul Hand · Richard Baraniuk · Joan Bruna · Alex Dimakis · Deanna Needell
- 2018 Poster: A convex program for bilinear inversion of sparse vectors
  Alireza Aghasi · Ali Ahmed · Paul Hand · Babhru Joshi
- 2018 Poster: Blind Deconvolutional Phase Retrieval via Convex Programming
  Ali Ahmed · Alireza Aghasi · Paul Hand
- 2018 Spotlight: Blind Deconvolutional Phase Retrieval via Convex Programming
  Ali Ahmed · Alireza Aghasi · Paul Hand
- 2018 Poster: Phase Retrieval Under a Generative Prior
  Paul Hand · Oscar Leong · Vlad Voroninski
- 2018 Oral: Phase Retrieval Under a Generative Prior
  Paul Hand · Oscar Leong · Vlad Voroninski