Poster
The All-or-Nothing Phenomenon in Sparse Tensor PCA
Jonathan Niles-Weed · Ilias Zadik
We study the statistical problem of estimating a rank-one sparse tensor corrupted by additive Gaussian noise, a Gaussian additive model also known as sparse tensor PCA. We show that for Bernoulli and Bernoulli-Rademacher distributed signals and \emph{for all} sparsity levels which are sublinear in the dimension of the signal, the sparse tensor PCA model exhibits a phase transition called the \emph{all-or-nothing phenomenon}. This is the property that for some signal-to-noise ratio (SNR) $\mathrm{SNR_c}$ and any fixed $\epsilon>0$, if the SNR of the model is below $\left(1-\epsilon\right)\mathrm{SNR_c}$, then it is impossible to achieve any constant correlation with the hidden signal, however small, while if the SNR is above $\left(1+\epsilon\right)\mathrm{SNR_c}$, then it is possible to achieve almost perfect correlation with the hidden signal. The all-or-nothing phenomenon was initially established in the context of sparse linear regression, and over the last year also in the context of sparse 2-tensor (matrix) PCA and Bernoulli group testing. Our results follow from a more general result showing that for any Gaussian additive model with a discrete uniform prior, the all-or-nothing phenomenon follows as a direct outcome of an appropriately defined ``near-orthogonality'' property of the support of the prior distribution.
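To make the observation model concrete, the following is a minimal sketch of sampling from sparse tensor PCA with a Bernoulli-Rademacher signal, as described in the abstract: the observation is a rank-one order-$k$ tensor $x^{\otimes k}$, scaled by the SNR, plus i.i.d. standard Gaussian noise. All function names, the specific normalization, and the parameter values are illustrative assumptions, not the paper's exact conventions.

```python
import numpy as np

def sample_sparse_tensor_pca(n=30, order=3, sparsity=5, snr=2.0, seed=0):
    """Sample Y = snr * x^{tensor k} + W (illustrative sketch, not the
    paper's exact normalization)."""
    rng = np.random.default_rng(seed)

    # Bernoulli-Rademacher signal: a uniformly random k-subset supports x,
    # and each supported coordinate carries an independent +/-1 sign.
    x = np.zeros(n)
    support = rng.choice(n, size=sparsity, replace=False)
    x[support] = rng.choice([-1.0, 1.0], size=sparsity)

    # Build the rank-one order-k tensor x^{tensor k} by repeated outer products.
    signal = x
    for _ in range(order - 1):
        signal = np.multiply.outer(signal, x)

    # Additive Gaussian noise: i.i.d. N(0, 1) entries.
    noise = rng.standard_normal(signal.shape)
    return snr * signal + noise, x

Y, x = sample_sparse_tensor_pca()
print(Y.shape)  # (30, 30, 30)
```

The paper's question is then at which SNR scale any estimator can correlate with the hidden `x`; the result above says the achievable correlation jumps from nothing to almost perfect across a single critical threshold $\mathrm{SNR_c}$.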
Author Information
Jonathan Niles-Weed (NYU)
Ilias Zadik (NYU)
More from the Same Authors
- 2021 : Sinkhorn EM: An Expectation-Maximization algorithm based on entropic optimal transport »
  Gonzalo Mena · Amin Nejatbakhsh · Erdem Varol · Jonathan Niles-Weed
- 2021 : Entropic estimation of optimal transport maps »
  Aram-Alexandre Pooladian · Jonathan Niles-Weed
- 2022 Panel: Panel 3B-4: Learning and Covering… & Asymptotics of smoothed… »
  Jonathan Niles-Weed · Konstantinos Stavropoulos
- 2022 Poster: Distributional Convergence of the Sliced Wasserstein Process »
  Jiaqi Xi · Jonathan Niles-Weed
- 2022 Poster: Asymptotics of smoothed Wasserstein distances in the small noise regime »
  Yunzi Ding · Jonathan Niles-Weed
- 2020 Poster: Early-Learning Regularization Prevents Memorization of Noisy Labels »
  Sheng Liu · Jonathan Niles-Weed · Narges Razavian · Carlos Fernandez-Granda
- 2020 Poster: Optimal Private Median Estimation under Minimal Distributional Assumptions »
  Christos Tzamos · Emmanouil-Vasileios Vlatakis-Gkaragkounis · Ilias Zadik
- 2020 Spotlight: Optimal Private Median Estimation under Minimal Distributional Assumptions »
  Christos Tzamos · Emmanouil-Vasileios Vlatakis-Gkaragkounis · Ilias Zadik