We consider the bilinear inverse problem of recovering two vectors, x in R^L and w in R^L, from their entrywise product. We consider the case where x and w have known signs and are sparse with respect to known dictionaries of size K and N, respectively. Here, K and N may be larger than, smaller than, or equal to L. We introduce L1BranchHull, a convex program posed in the natural parameter space that does not require an approximate solution or initialization in order to be stated or solved. We study the case where x and w are S1- and S2-sparse with respect to a random dictionary, with the sparse vectors satisfying an effective sparsity condition, and present a recovery guarantee that depends on the number of measurements as L > Omega((S1+S2)(log(K+N))^2). Numerical experiments verify that the scaling constant in the theorem is not too large. One application of this problem is the sweep distortion removal task in dielectric imaging, where one of the signals is a nonnegative reflectivity and the other signal lives in a known subspace, for example that given by dominant wavelet coefficients. We also introduce variants of L1BranchHull for the purposes of tolerating noise and outliers, and for the purpose of recovering piecewise constant signals. We provide an ADMM implementation of these variants and show they can extract piecewise constant behavior from real images.
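The measurement model described above can be sketched in a few lines: draw random dictionaries, form sparse coefficient vectors, and observe the entrywise product of the two lifted signals. This is a minimal illustration of the setup, not the paper's solver; the dimensions, the dictionary names B and C, and the coefficient names h and m are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes (not the paper's experimental values):
# L measurements, dictionaries of size K and N, sparsity levels S1, S2.
L, K, N = 256, 64, 64
S1, S2 = 3, 3

# Random Gaussian dictionaries B in R^{L x K} and C in R^{L x N}.
B = rng.standard_normal((L, K))
C = rng.standard_normal((L, N))

# S1-sparse and S2-sparse coefficient vectors h and m.
h = np.zeros(K)
h[rng.choice(K, size=S1, replace=False)] = rng.standard_normal(S1)
m = np.zeros(N)
m[rng.choice(N, size=S2, replace=False)] = rng.standard_normal(S2)

# The two unknown signals and their entrywise-product measurements.
x = B @ h            # x in R^L, sparse with respect to B
w = C @ m            # w in R^L, sparse with respect to C
y = x * w            # observed data: entrywise product

# The known sign information assumed by the recovery program.
sx, sw = np.sign(x), np.sign(w)
```

Recovering h and m from y (up to the inherent scaling ambiguity x -> c*x, w -> w/c) is the bilinear inversion task that L1BranchHull addresses.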
Author Information
Alireza Aghasi (Institute for Insight)
Ali Ahmed (Information Technology University)
Paul Hand (Northeastern University)
Babhru Joshi (Rice University)
More from the Same Authors

2021 Poster: Score-based Generative Neural Networks for Large-Scale Optimal Transport »
Grady Daniels · Tyler Maunu · Paul Hand 
2021 Workshop: Workshop on Deep Learning and Inverse Problems »
Reinhard Heckel · Paul Hand · Rebecca Willett · Christopher Metzler · Mahdi Soltanolkotabi 
2020 Workshop: Workshop on Deep Learning and Inverse Problems »
Reinhard Heckel · Paul Hand · Richard Baraniuk · Lenka Zdeborová · Soheil Feizi 
2020 Poster: Nonasymptotic Guarantees for Spiked Matrix Recovery with Generative Priors »
Jorio Cocola · Paul Hand · Vlad Voroninski 
2019 Workshop: Solving inverse problems with deep networks: New architectures, theoretical foundations, and applications »
Reinhard Heckel · Paul Hand · Richard Baraniuk · Joan Bruna · Alexandros Dimakis · Deanna Needell 
2019 Poster: Global Guarantees for Blind Demodulation with Generative Priors »
Paul Hand · Babhru Joshi 
2018 Poster: Blind Deconvolutional Phase Retrieval via Convex Programming »
Ali Ahmed · Alireza Aghasi · Paul Hand 
2018 Spotlight: Blind Deconvolutional Phase Retrieval via Convex Programming »
Ali Ahmed · Alireza Aghasi · Paul Hand 
2018 Poster: Phase Retrieval Under a Generative Prior »
Paul Hand · Oscar Leong · Vlad Voroninski 
2018 Oral: Phase Retrieval Under a Generative Prior »
Paul Hand · Oscar Leong · Vlad Voroninski 
2017 Poster: Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee »
Alireza Aghasi · Afshin Abdi · Nam Nguyen · Justin Romberg 
2017 Spotlight: Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee »
Alireza Aghasi · Afshin Abdi · Nam Nguyen · Justin Romberg