From Bayesian Sparsity to Gated Recurrent Nets
The iterations of many first-order algorithms, when applied to minimizing common regularized regression functions, often resemble neural network layers with pre-specified weights. This observation has prompted the development of learning-based approaches that purport to replace these iterations with enhanced surrogates forged as DNN models from available training data. For example, important NP-hard sparse estimation problems have recently benefitted from this genre of upgrade, with simple feedforward or recurrent networks ousting proximal gradient-based iterations. Analogously, this paper demonstrates that more powerful Bayesian algorithms for promoting sparsity, which rely on complex multi-loop majorization-minimization techniques, mirror the structure of more sophisticated long short-term memory (LSTM) networks, or alternative gated feedback networks previously designed for sequence prediction. As part of this development, we examine the parallels between latent variable trajectories operating across multiple time-scales during optimization, and the activations within deep network structures designed to adaptively model such characteristic sequences. The resulting insights lead to a novel sparse estimation system that, when granted training data, can estimate optimal solutions efficiently in regimes where other algorithms fail, including practical direction-of-arrival (DOA) and 3D geometry recovery problems. The underlying principles we expose are also suggestive of a learning process for a richer class of multi-loop algorithms in other domains.
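To make the opening observation concrete, the sketch below (not taken from the paper, and not its gated-network method) writes one proximal-gradient (ISTA) step for the standard L1-regularized regression problem as a recurrent layer with pre-specified weights and a soft-threshold activation; learned surrogates such as LISTA simply replace these fixed quantities with trainable parameters. All names here (soft_threshold, ista, A, y, lam) and the synthetic data are illustrative assumptions.

```python
# A minimal sketch of the abstract's opening observation: one ISTA /
# proximal-gradient step for the LASSO problem
#     min_x 0.5 * ||y - A x||^2 + lam * ||x||_1
# has the form  x_{t+1} = soft(W_e y + S x_t, theta),
# i.e., a recurrent layer with pre-specified weights W_e, S and a
# soft-threshold nonlinearity.

import numpy as np

def soft_threshold(z, theta):
    """Elementwise soft-thresholding: the proximal operator of the L1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

def ista(A, y, lam, num_iters=500):
    """Run ISTA; each loop body reads like one layer of a recurrent net."""
    m, n = A.shape
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L, L = Lipschitz constant of the gradient
    W_e = step * A.T                             # fixed "input" weights
    S = np.eye(n) - step * (A.T @ A)             # fixed "recurrent" weights
    theta = step * lam                           # fixed threshold (activation parameter)
    x = np.zeros(n)
    for _ in range(num_iters):
        x = soft_threshold(W_e @ y + S @ x, theta)   # one "layer" of the unrolled network
    return x

# Tiny usage example on synthetic sparse data (illustrative only).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
y = A @ x_true
x_hat = ista(A, y, lam=0.1)
print("recovered support:", np.nonzero(np.abs(x_hat) > 1e-3)[0])
```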
Author Information
Hao He (MIT, CSAIL)
Bo Xin (Microsoft Research)
Satoshi Ikehata (National Institute of Informatics)
David Wipf (Microsoft Research)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Poster: From Bayesian Sparsity to Gated Recurrent Nets
  Thu. Dec 7th, 02:30 -- 06:30 AM, Room Pacific Ballroom #48
More from the Same Authors
- 2021 Spotlight: On the Value of Infinite Gradients in Variational Autoencoder Models
  Bin Dai · Li Wenliang · David Wipf
- 2022: Contactless Oxygen Monitoring with Gated Transformer
  Hao He · Yuan Yuan · Yingcong Chen · Peng Cao · Dina Katabi
- 2021 Poster: A Biased Graph Neural Network Sampler with Near-Optimal Regret
  Qingru Zhang · David Wipf · Quan Gan · Le Song
- 2021 Poster: GRIN: Generative Relation and Intention Network for Multi-agent Trajectory Prediction
  Longyuan Li · Jian Yao · Li Wenliang · Tong He · Tianjun Xiao · Junchi Yan · David Wipf · Zheng Zhang
- 2021 Poster: From Canonical Correlation Analysis to Self-supervised Graph Neural Networks
  Hengrui Zhang · Qitian Wu · Junchi Yan · David Wipf · Philip S Yu
- 2021 Poster: On the Value of Infinite Gradients in Variational Autoencoder Models
  Bin Dai · Li Wenliang · David Wipf
- 2019: Poster Session 1
  Hongzi Mao · Vikram Nathan · Ioana Baldini · Viswanath Sivakumar · Haonan Wang · Vinoj Yasanga Jayasundara Magalle Hewa · Zhan Shi · Samuel Kaufman · Joyce Fang · Giulio Zhou · Jialin Ding · Hao He · Miles Lubin
- 2016 Poster: A Pseudo-Bayesian Algorithm for Robust PCA
  Tae-Hyun Oh · Yasuyuki Matsushita · In So Kweon · David Wipf
- 2016 Poster: Maximal Sparsity with Deep Networks?
  Bo Xin · Yizhou Wang · Wen Gao · David Wipf · Baoyuan Wang
- 2013 Poster: Non-Uniform Camera Shake Removal Using a Spatially-Adaptive Sparse Penalty
  Haichao Zhang · David Wipf
- 2013 Oral: Non-Uniform Camera Shake Removal Using a Spatially-Adaptive Sparse Penalty
  Haichao Zhang · David Wipf