We consider the Principal Component Analysis problem for large tensors of arbitrary order k under a single-spike (or rank-one plus noise) model. On the one hand, we use information theory and recent results in probability theory to establish necessary and sufficient conditions under which the principal component can be estimated using unbounded computational resources. It turns out that this is possible as soon as the signal-to-noise ratio \beta becomes larger than C\sqrt{k \log k} (and in particular \beta can remain bounded as the problem dimensions increase). On the other hand, we analyze several polynomial-time estimation algorithms, based on tensor unfolding, power iteration, and message passing ideas from graphical models. We show that, unless the signal-to-noise ratio diverges with the system dimensions, none of these approaches succeeds. This is possibly related to a fundamental limitation of computationally tractable estimators for this problem. For moderate dimensions, we propose a hybrid approach that uses unfolding together with power iteration, and show that it significantly outperforms baseline methods. Finally, we consider the case in which additional side information is available about the unknown signal. We characterize the amount of side information that allows the iterative algorithms to converge to a good estimate.
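As a rough illustration of the spiked model and of the unfolding-plus-power-iteration idea mentioned in the abstract, the Python sketch below builds a toy order-3 instance and runs a power iteration initialized from a matrix unfolding. This is not the authors' implementation: the dimension, signal-to-noise value, noise normalization, and iteration count are arbitrary choices made only for the example.

```python
import numpy as np

def spiked_tensor(n, beta, rng):
    """Toy order-3 instance: beta * v (x) v (x) v plus symmetric Gaussian noise.
    The 1/sqrt(n) noise scaling is one possible convention, chosen for illustration."""
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    Z = rng.standard_normal((n, n, n))
    # Symmetrize the noise over all index permutations.
    Z = (Z + Z.transpose(0, 2, 1) + Z.transpose(1, 0, 2)
           + Z.transpose(1, 2, 0) + Z.transpose(2, 0, 1) + Z.transpose(2, 1, 0)) / 6.0
    T = beta * np.einsum('i,j,k->ijk', v, v, v) + Z / np.sqrt(n)
    return T, v

def unfolding_init(T):
    """Initialization from the top left singular vector of the n x n^2 unfolding."""
    n = T.shape[0]
    M = T.reshape(n, n * n)
    U, _, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, 0]

def power_iteration(T, u0, iters=100):
    """Tensor power iteration: u <- T(., u, u), renormalized at each step."""
    u = u0 / np.linalg.norm(u0)
    for _ in range(iters):
        w = np.einsum('ijk,j,k->i', T, u, u)
        u = w / np.linalg.norm(w)
    return u

rng = np.random.default_rng(0)
T, v = spiked_tensor(n=60, beta=5.0, rng=rng)
u = power_iteration(T, unfolding_init(T))
print('correlation |<u, v>| =', abs(u @ v))  # sign of the estimate is arbitrary
```

The unfolding step plays the role of the initialization in the hybrid scheme; a random vector could be passed to power_iteration instead to mimic plain power iteration.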
Author Information
Emile Richard (Amazon)
Andrea Montanari (Stanford)
More from the Same Authors
- 2021 Poster: Streaming Belief Propagation for Community Detection »
  Yuchen Wu · Jakab Tardos · Mohammadhossein Bateni · André Linhares · Filipe Miguel Goncalves de Almeida · Andrea Montanari · Ashkan Norouzi-Fard
- 2020 Poster: When Do Neural Networks Outperform Kernel Methods? »
  Behrooz Ghorbani · Song Mei · Theodor Misiakiewicz · Andrea Montanari
- 2019 Poster: Limitations of Lazy Training of Two-layers Neural Network »
  Behrooz Ghorbani · Song Mei · Theodor Misiakiewicz · Andrea Montanari
- 2019 Spotlight: Limitations of Lazy Training of Two-layers Neural Network »
  Behrooz Ghorbani · Song Mei · Theodor Misiakiewicz · Andrea Montanari
- 2018 Poster: Contextual Stochastic Block Models »
  Yash Deshpande · Subhabrata Sen · Andrea Montanari · Elchanan Mossel
- 2018 Spotlight: Contextual Stochastic Block Models »
  Yash Deshpande · Subhabrata Sen · Andrea Montanari · Elchanan Mossel
- 2017 Poster: Inference in Graphical Models via Semidefinite Programming Hierarchies »
  Murat Erdogdu · Yash Deshpande · Andrea Montanari
- 2015: Information-theoretic bounds on learning network dynamics »
  Andrea Montanari
- 2015 Poster: Convergence rates of sub-sampled Newton methods »
  Murat Erdogdu · Andrea Montanari
- 2015 Poster: On the Limitation of Spectral Methods: From the Gaussian Hidden Clique Problem to Rank-One Perturbations of Gaussian Tensors »
  Andrea Montanari · Daniel Reichman · Ofer Zeitouni
- 2015 Poster: Recognizing retinal ganglion cells in the dark »
  Emile Richard · Georges A Goetz · E.J. Chichilnisky
- 2014 Poster: Cone-Constrained Principal Component Analysis »
  Yash Deshpande · Andrea Montanari · Emile Richard
- 2014 Poster: Sparse PCA via Covariance Thresholding »
  Yash Deshpande · Andrea Montanari
- 2014 Poster: Tight convex relaxations for sparse matrix factorization »
  Emile Richard · Guillaume R Obozinski · Jean-Philippe Vert
- 2013 Poster: Estimating LASSO Risk and Noise Level »
  Mohsen Bayati · Murat Erdogdu · Andrea Montanari
- 2013 Poster: Confidence Intervals and Hypothesis Testing for High-Dimensional Statistical Models »
  Adel Javanmard · Andrea Montanari
- 2013 Poster: Model Selection for High-Dimensional Regression under the Generalized Irrepresentability Condition »
  Adel Javanmard · Andrea Montanari
- 2012 Poster: Link Prediction in Graphs with Autoregressive Features »
  Emile Richard · Stephane Gaiffas · Nicolas Vayatis
- 2010 Poster: Learning Networks of Stochastic Differential Equations »
  José Bento · Morteza Ibrahimi · Andrea Montanari
- 2010 Poster: The LASSO risk: asymptotic results and real world examples »
  Mohsen Bayati · José Bento · Andrea Montanari
- 2010 Poster: Link Discovery using Graph Feature Tracking »
  Emile Richard · Nicolas Baskiotis · Theos Evgeniou · Nicolas Vayatis
- 2009 Poster: Matrix Completion from Noisy Entries »
  Raghunandan Keshavan · Andrea Montanari · Sewoong Oh
- 2009 Poster: Which graphical models are difficult to learn? »
  Andrea Montanari · José Bento