Spotlight
Optimal Sketching for Trace Estimation
Shuli Jiang · Hai Pham · David Woodruff · Richard Zhang
Matrix trace estimation is ubiquitous in machine learning applications and has traditionally relied on Hutchinson's method, which requires $O(\log(1/\delta)/\epsilon^2)$ matrix-vector product queries to achieve a $(1 \pm \epsilon)$-multiplicative approximation to $\text{trace}(A)$ with failure probability $\delta$ on positive semidefinite input matrices $A$. Recently, the Hutch++ algorithm was proposed, which reduces the number of matrix-vector queries from $O(1/\epsilon^2)$ to the optimal $O(1/\epsilon)$ and succeeds with constant probability. However, in the high-probability setting, the nonadaptive Hutch++ algorithm suffers an extra $O(\sqrt{\log(1/\delta)})$ multiplicative factor in its query complexity. Nonadaptive methods are important, as they correspond to sketching algorithms, which are mergeable and highly parallelizable, and which yield low-memory streaming algorithms as well as low-communication distributed protocols. In this work, we close the gap between nonadaptive and adaptive algorithms, showing that even nonadaptive algorithms can achieve $O(\sqrt{\log(1/\delta)}/\epsilon + \log(1/\delta))$ matrix-vector products. In addition, we prove matching lower bounds demonstrating that, up to a $\log \log(1/\delta)$ factor, no further improvement in the dependence on $\delta$ or $\epsilon$ is possible by any nonadaptive algorithm. Finally, our experiments demonstrate the superior performance of our sketch over the adaptive Hutch++ algorithm, which is less parallelizable, as well as over the nonadaptive Hutchinson's method.
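As a rough illustration of the two baselines the abstract refers to, the following NumPy sketch shows Hutchinson's estimator and the standard Hutch++ query split (exact trace of a low-rank projected part plus Hutchinson on the residual). The function names, the one-third query split, and the Rademacher test vectors are illustrative assumptions, not the sketch construction proposed in this paper:

```python
import numpy as np

def hutchinson(matvec, n, num_queries, rng=None):
    """Hutchinson's estimator: average x^T (A x) over random sign vectors x."""
    rng = np.random.default_rng(rng)
    est = 0.0
    for _ in range(num_queries):
        x = rng.choice([-1.0, 1.0], size=n)  # Rademacher query vector
        est += x @ matvec(x)                 # one matrix-vector product per query
    return est / num_queries

def hutch_pp(A, m, rng=None):
    """Hutch++-style estimator using m matrix-vector products in total:
    spend 2k queries computing the exact trace of A projected onto a
    rank-k sketch subspace, and the remaining m - 2k queries running
    Hutchinson on the (small-trace) residual."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    k = m // 3                                   # illustrative query split
    S = rng.choice([-1.0, 1.0], size=(n, k))     # random sign sketch matrix
    Q, _ = np.linalg.qr(A @ S)                   # k matvecs; orthonormal basis
    trace_top = np.trace(Q.T @ (A @ Q))          # k matvecs; exact projected trace

    def resid_matvec(x):
        # Applies (I - QQ^T) A (I - QQ^T) with one matvec against A.
        x = x - Q @ (Q.T @ x)
        y = A @ x
        return y - Q @ (Q.T @ y)

    return trace_top + hutchinson(resid_matvec, n, m - 2 * k, rng)
```

For a positive semidefinite $A$, the projected part captures the large eigenvalues exactly, so Hutchinson's variance is only incurred on the small residual trace; this is what drives the improvement from $O(1/\epsilon^2)$ to $O(1/\epsilon)$ queries.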
Author Information
Shuli Jiang (Carnegie Mellon University)
Hai Pham (Carnegie Mellon University)
David Woodruff (Carnegie Mellon University)
Richard Zhang (Google Brain)
Related Events (a corresponding poster, oral, or spotlight)

2021 Poster: Optimal Sketching for Trace Estimation »
Thu. Dec 9th 12:30 -- 02:00 AM Room
More from the Same Authors

2022 Spotlight: Optimal Query Complexities for Dynamic Trace Estimation »
David Woodruff · Fred Zhang · Richard Zhang 
2022 Poster: Optimal Query Complexities for Dynamic Trace Estimation »
David Woodruff · Fred Zhang · Richard Zhang 
2022 Poster: Towards Learning Universal Hyperparameter Optimizers with Transformers »
Yutian Chen · Xingyou Song · Chansoo Lee · Zi Wang · Richard Zhang · David Dohan · Kazuya Kawakami · Greg Kochanski · Arnaud Doucet · Marc'Aurelio Ranzato · Sagi Perel · Nando de Freitas 
2021 Poster: Linear and Kernel Classification in the Streaming Model: Improved Bounds for Heavy Hitters »
Arvind Mahankali · David Woodruff 
2021 Poster: Few-Shot Data-Driven Algorithms for Low Rank Approximation »
Piotr Indyk · Tal Wagner · David Woodruff 
2020 Poster: Revisiting the Sample Complexity of Sparse Spectrum Approximation of Gaussian Processes »
Minh Hoang · Nghia Hoang · Hai Pham · David Woodruff 
2019 Poster: Regularized Weighted Low Rank Approximation »
Frank Ban · David Woodruff · Richard Zhang 
2017 Poster: Approximation Algorithms for $\ell_0$-Low Rank Approximation »
Karl Bringmann · Pavel Kolev · David Woodruff 
2017 Poster: Near Optimal Sketching of Low-Rank Tensor Regression »
Xingguo Li · Jarvis Haupt · David Woodruff 
2017 Poster: Is Input Sparsity Time Possible for Kernel LowRank Approximation? »
Cameron Musco · David Woodruff