We introduce a new scalable approximation for Gaussian processes with provable guarantees that hold simultaneously over the entire parameter space. Our approximation is obtained from an improved sample complexity analysis for sparse spectrum Gaussian processes (SSGPs). In particular, our analysis shows that under a certain data disentangling condition, an SSGP's prediction and model evidence (for training) closely approximate those of a full GP with low sample complexity. We also develop a new auto-encoding algorithm that finds a latent space in which the latent input coordinates are disentangled into well-separated clusters, making them amenable to our sample complexity analysis. We validate our proposed method on several benchmarks with promising results supporting our theoretical analysis.
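To make the sparse spectrum idea concrete, the sketch below approximates an RBF-kernel GP with random Fourier features (the standard SSGP basis) and compares the resulting posterior mean against the exact GP posterior mean on toy 1-D data. This is a minimal, generic illustration of SSGP prediction only; it is not the paper's algorithm, and all names, hyperparameters, and data here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (illustrative only).
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)

lengthscale, noise_var, m = 1.0, 0.01, 500  # m = number of spectral samples

# Sparse spectrum basis: frequencies drawn from the RBF kernel's
# spectral density (a Gaussian), plus random phases.
W = rng.standard_normal((m, 1)) / lengthscale
b = rng.uniform(0, 2 * np.pi, size=m)

def features(Z):
    # Random Fourier feature map; phi(x)^T phi(x') approximates k(x, x').
    return np.sqrt(2.0 / m) * np.cos(Z @ W.T + b)

# SSGP prediction = Bayesian linear regression in feature space.
Phi = features(X)
A = Phi.T @ Phi + noise_var * np.eye(m)
w_mean = np.linalg.solve(A, Phi.T @ y)

# Exact full-GP posterior mean for comparison.
def rbf(A_, B_):
    d2 = ((A_[:, None, :] - B_[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

Xs = np.linspace(-3, 3, 50)[:, None]
gp_mean = rbf(Xs, X) @ np.linalg.solve(rbf(X, X) + noise_var * np.eye(40), y)
ssgp_mean = features(Xs) @ w_mean

# The gap shrinks as m grows (roughly O(1/sqrt(m)) for this basis).
print(np.max(np.abs(gp_mean - ssgp_mean)))
```

The cost of the SSGP solve scales with the number of spectral samples m rather than the number of data points, which is what makes this family of approximations attractive at scale.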
Author Information
Minh Hoang (Carnegie Mellon University)
Nghia Hoang (Amazon)
Hai Pham (Carnegie Mellon University)
David Woodruff (Carnegie Mellon University)
More from the Same Authors
- 2021 Spotlight: Optimal Sketching for Trace Estimation
  Shuli Jiang · Hai Pham · David Woodruff · Richard Zhang
- 2021: Personalized Neural Architecture Search for Federated Learning
  Minh Hoang · Carl Kingsford
- 2023 Poster: Incentives in Private Collaborative Machine Learning
  Rachael Sim · Yehong Zhang · Nghia Hoang · Xinyi Xu · Bryan Kian Hsiang Low · Patrick Jaillet
- 2021: Contributed Talk 1: Personalized Neural Architecture Search for Federated Learning
  Minh Hoang · Carl Kingsford
- 2021 Workshop: New Frontiers in Federated Learning: Privacy, Fairness, Robustness, Personalization and Data Ownership
  Nghia Hoang · Lam Nguyen · Pin-Yu Chen · Tsui-Wei Weng · Sara Magliacane · Bryan Kian Hsiang Low · Anoop Deoras
- 2021 Poster: Linear and Kernel Classification in the Streaming Model: Improved Bounds for Heavy Hitters
  Arvind Mahankali · David Woodruff
- 2021 Poster: Optimal Sketching for Trace Estimation
  Shuli Jiang · Hai Pham · David Woodruff · Richard Zhang
- 2021 Poster: Few-Shot Data-Driven Algorithms for Low Rank Approximation
  Piotr Indyk · Tal Wagner · David Woodruff
- 2017 Poster: Approximation Algorithms for $\ell_0$-Low Rank Approximation
  Karl Bringmann · Pavel Kolev · David Woodruff
- 2017 Poster: Near Optimal Sketching of Low-Rank Tensor Regression
  Xingguo Li · Jarvis Haupt · David Woodruff
- 2017 Poster: Is Input Sparsity Time Possible for Kernel Low-Rank Approximation?
  Cameron Musco · David Woodruff