Poster
The price of ignorance: how much does it cost to forget noise structure in low-rank matrix estimation?
Jean Barbier · TianQi Hou · Marco Mondelli · Manuel Saenz
We consider the problem of estimating a rank-$1$ signal corrupted by structured rotationally invariant noise, and address the following question: \emph{how well do inference algorithms perform when the noise statistics are unknown and Gaussian noise is assumed instead?} While the matched Bayes-optimal setting with unstructured noise is well understood, the analysis of this mismatched problem is still in its infancy. In this paper, we take a step towards understanding the effect of a strong source of mismatch: the noise statistics. Our main technical contribution is the rigorous analysis of a Bayes estimator and of an approximate message passing (AMP) algorithm, both of which incorrectly assume a Gaussian setup. The first result exploits the theory of spherical integrals and of low-rank matrix perturbations; the idea behind the second one is to design and analyze an artificial AMP which, by taking advantage of the flexibility in the denoisers, is able to "correct" the mismatch. Armed with these sharp asymptotic characterizations, we unveil a rich and often unexpected phenomenology. For example, although AMP is in principle designed to efficiently compute the Bayes estimator, the former is \emph{outperformed} by the latter in terms of mean-square error. We show that this performance gap is due to an incorrect estimation of the signal norm. In fact, when the SNR is large enough, the overlaps of the AMP and Bayes estimators coincide, and they even match those of optimal estimators that take the structure of the noise into account.
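To make the setting concrete, the following is a minimal sketch of Bayes-optimal AMP for the *matched* Gaussian case, i.e., a rank-$1$ spike with $\pm 1$ entries observed through GOE (Wigner) noise. This is the textbook baseline whose denoiser the paper's mismatched AMP would incorrectly apply to structured noise; it is not the paper's algorithm. The SNR value, the informative initialization (a stand-in for spectral initialization), and all variable names are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam, T = 2000, 3.0, 15  # dimension, SNR, number of AMP iterations

# Spiked Wigner model: Y = (lam/n) x x^T + W, with x ~ Unif{+-1}^n and W ~ GOE(n)
x = rng.choice([-1.0, 1.0], size=n)
A = rng.normal(size=(n, n)) / np.sqrt(n)
W = (A + A.T) / np.sqrt(2.0)          # symmetric, off-diagonal variance 1/n
Y = (lam / n) * np.outer(x, x) + W

# Posterior-mean denoiser for the +-1 prior under *Gaussian* noise;
# on the Nishimori line the effective field reduces it to tanh(lam * m)
f = lambda m: np.tanh(lam * m)

m = 0.3 * x + rng.normal(size=n)      # informative init (illustrative; spectral init in practice)
u_prev = np.zeros(n)
for _ in range(T):
    u = f(m)
    b = lam * np.mean(1.0 - u**2)     # Onsager coefficient: (1/n) sum_i f'(m_i)
    m = Y @ u - b * u_prev            # AMP update with memory correction
    u_prev = u

x_hat = f(m)
overlap = abs(x_hat @ x) / n          # normalized overlap with the hidden spike
print(f"normalized overlap |<x_hat, x>|/n = {overlap:.3f}")
```

The Onsager term is what distinguishes AMP from a naive power iteration: it removes the correlation between the iterate and the noise so that the effective field stays Gaussian, which is exactly the property that breaks down (and must be restored by redesigned denoisers) when the noise is structured.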
Author Information
Jean Barbier (ICTP)
TianQi Hou (Huawei Technologies Ltd.)
Marco Mondelli (IST Austria)
Manuel Saenz (ICTP)
More from the Same Authors
- 2022 : Mean-field analysis for heavy ball methods: Dropout-stability, connectivity, and global convergence »
  Diyuan Wu · Vyacheslav Kungurtsev · Marco Mondelli
- 2022 : Poster Session 1 »
  Andrew Lowy · Thomas Bonnier · Yiling Xie · Guy Kornowski · Simon Schug · Seungyub Han · Nicolas Loizou · xinwei zhang · Laurent Condat · Tabea E. Röber · Si Yi Meng · Marco Mondelli · Runlong Zhou · Eshaan Nichani · Adrian Goldwaser · Rudrajit Das · Kayhan Behdin · Atish Agarwala · Mukul Gagrani · Gary Cheng · Tian Li · Haoran Sun · Hossein Taheri · Allen Liu · Siqi Zhang · Dmitrii Avdiukhin · Bradley Brown · Miaolan Xie · Junhyung Lyle Kim · Sharan Vaswani · Xinmeng Huang · Ganesh Ramachandra Kini · Angela Yuan · Weiqiang Zheng · Jiajin Li
- 2022 Poster: Memorization and Optimization in Deep Neural Networks with Minimum Over-parameterization »
  Simone Bombari · Mohammad Hossein Amani · Marco Mondelli
- 2021 Poster: When Are Solutions Connected in Deep Networks? »
  Quynh Nguyen · Pierre Bréchet · Marco Mondelli
- 2021 Poster: PCA Initialization for Approximate Message Passing in Rotationally Invariant Models »
  Marco Mondelli · Ramji Venkataramanan
- 2020 Poster: Global Convergence of Deep Networks with One Wide Layer Followed by Pyramidal Topology »
  Quynh Nguyen · Marco Mondelli