Poster
Optimal Sketching for Trace Estimation
Shuli Jiang · Hai Pham · David Woodruff · Richard Zhang
Matrix trace estimation is ubiquitous in machine learning applications and has traditionally relied on Hutchinson's method, which requires $O(\log(1/\delta)/\epsilon^2)$ matrix-vector product queries to achieve a $(1 \pm \epsilon)$-multiplicative approximation to $\text{trace}(A)$ with failure probability $\delta$ on positive-semidefinite input matrices $A$. Recently, the Hutch++ algorithm was proposed, which reduces the number of matrix-vector queries from $O(1/\epsilon^2)$ to the optimal $O(1/\epsilon)$, and the algorithm succeeds with constant probability. However, in the high probability setting, the non-adaptive Hutch++ algorithm suffers an extra $O(\sqrt{\log(1/\delta)})$ multiplicative factor in its query complexity. Non-adaptive methods are important, as they correspond to sketching algorithms, which are mergeable, highly parallelizable, and provide low-memory streaming algorithms as well as low-communication distributed protocols. In this work, we close the gap between non-adaptive and adaptive algorithms, showing that even non-adaptive algorithms can achieve $O(\sqrt{\log(1/\delta)}/\epsilon + \log(1/\delta))$ matrix-vector products. In addition, we prove matching lower bounds demonstrating that, up to a $\log \log(1/\delta)$ factor, no further improvement in the dependence on $\delta$ or $\epsilon$ is possible by any non-adaptive algorithm. Finally, our experiments demonstrate the superior performance of our sketch over the adaptive Hutch++ algorithm, which is less parallelizable, as well as over the non-adaptive Hutchinson's method.
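To make the two baseline estimators discussed above concrete, here is a minimal NumPy sketch of Hutchinson's method and of the Hutch++ deflation idea (Meyer et al.), assuming only matrix-vector product access to a PSD matrix $A$. This illustrates the baselines, not the paper's new non-adaptive sketch; note that this simple Hutch++ variant is adaptive, since the second round of queries depends on the first. Function names and the equal three-way query split are illustrative choices.

```python
import numpy as np

def hutchinson(matvec, n, m, rng):
    """Hutchinson's estimator: average g^T A g over m random sign vectors g."""
    G = rng.choice([-1.0, 1.0], size=(n, m))
    # Each column satisfies E[g^T A g] = trace(A); average m independent copies.
    return np.mean(np.sum(G * matvec(G), axis=0))

def hutch_pp(matvec, n, m, rng):
    """Hutch++: exact trace on a low-rank sketch + Hutchinson on the remainder.

    Splits the m matrix-vector queries into thirds: k to sketch A, k to
    form Q^T A Q, and k for Hutchinson on the deflated matrix.
    """
    k = m // 3
    S = rng.choice([-1.0, 1.0], size=(n, k))
    Q, _ = np.linalg.qr(matvec(S))   # orthonormal basis for a rank-k sketch of A
    G = rng.choice([-1.0, 1.0], size=(n, k))
    G = G - Q @ (Q.T @ G)            # project test vectors off the captured subspace
    # Exact trace of Q^T A Q plus Hutchinson on (I - QQ^T) A (I - QQ^T).
    return np.trace(Q.T @ matvec(Q)) + np.mean(np.sum(G * matvec(G), axis=0))
```

Because the deflated matrix has small Frobenius norm relative to the trace when $A$ is PSD, Hutch++ needs only $O(1/\epsilon)$ queries where Hutchinson needs $O(1/\epsilon^2)$.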
Author Information
Shuli Jiang (Carnegie Mellon University)
Hai Pham (Carnegie Mellon University)
David Woodruff (Carnegie Mellon University)
Richard Zhang (Google Brain)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: Optimal Sketching for Trace Estimation
More from the Same Authors
- 2022 Spotlight: Optimal Query Complexities for Dynamic Trace Estimation
  David Woodruff · Fred Zhang · Richard Zhang
- 2022 Poster: Optimal Query Complexities for Dynamic Trace Estimation
  David Woodruff · Fred Zhang · Richard Zhang
- 2022 Poster: Towards Learning Universal Hyperparameter Optimizers with Transformers
  Yutian Chen · Xingyou Song · Chansoo Lee · Zi Wang · Richard Zhang · David Dohan · Kazuya Kawakami · Greg Kochanski · Arnaud Doucet · Marc'Aurelio Ranzato · Sagi Perel · Nando de Freitas
- 2021 Poster: Linear and Kernel Classification in the Streaming Model: Improved Bounds for Heavy Hitters
  Arvind Mahankali · David Woodruff
- 2021 Poster: Few-Shot Data-Driven Algorithms for Low Rank Approximation
  Piotr Indyk · Tal Wagner · David Woodruff
- 2020 Poster: Revisiting the Sample Complexity of Sparse Spectrum Approximation of Gaussian Processes
  Minh Hoang · Nghia Hoang · Hai Pham · David Woodruff
- 2019 Poster: Regularized Weighted Low Rank Approximation
  Frank Ban · David Woodruff · Richard Zhang
- 2017 Poster: Approximation Algorithms for $\ell_0$-Low Rank Approximation
  Karl Bringmann · Pavel Kolev · David Woodruff
- 2017 Poster: Near Optimal Sketching of Low-Rank Tensor Regression
  Xingguo Li · Jarvis Haupt · David Woodruff
- 2017 Poster: Is Input Sparsity Time Possible for Kernel Low-Rank Approximation?
  Cameron Musco · David Woodruff