Random projections, or sketching, are widely used in many algorithmic and learning contexts. Here we study the performance of the iterative Hessian sketch for least-squares problems. By leveraging and extending recent results from random matrix theory on the limiting spectrum of matrices randomly projected with the subsampled randomized Hadamard transform and with truncated Haar matrices, we can study and compare the resulting algorithms to a level of precision that has not been possible before. Our technical contributions include a novel formula for the second moment of the inverse of projected matrices. We also find simple closed-form expressions for asymptotically optimal step-sizes and convergence rates. These show that the convergence rates for Haar and randomized Hadamard matrices are identical and asymptotically improve upon those of Gaussian random projections. These techniques may be applied to other algorithms that employ randomized dimension reduction.
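To make the iteration concrete, below is a minimal sketch of the iterative Hessian sketch for least-squares, using a Gaussian embedding for simplicity; the paper's analysis also covers Haar and subsampled randomized Hadamard (SRHT) sketches and derives the asymptotically optimal step-size, which the placeholder `mu` here does not implement. The function name and parameters are illustrative, not from the paper.

```python
import numpy as np

def iterative_hessian_sketch(A, b, m, num_iters=10, mu=1.0, seed=None):
    """Iterative Hessian sketch for min_x ||Ax - b||^2.

    Each iteration draws a fresh m-by-n sketch S with E[S^T S] = I,
    approximates the Hessian A^T A by A^T S^T S A, and moves along the
    resulting preconditioned gradient direction with step-size mu.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(num_iters):
        # Gaussian embedding; the paper compares this against Haar/SRHT.
        S = rng.standard_normal((m, n)) / np.sqrt(m)
        SA = S @ A
        grad = A.T @ (b - A @ x)  # exact (unsketched) gradient term
        # Sketched Newton-type subproblem: (A^T S^T S A) u = A^T (b - A x)
        u = np.linalg.solve(SA.T @ SA, grad)
        x = x + mu * u
    return x

# Usage on a tall least-squares problem (n >> d), sketch size d < m < n.
rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 50))
b = A @ rng.standard_normal(50) + 0.01 * rng.standard_normal(2000)
x_hat = iterative_hessian_sketch(A, b, m=200, num_iters=15)
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x_hat - x_ls))  # shrinks as num_iters grows
```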
Author Information
Jonathan Lacotte (Stanford University)
Sifan Liu (Stanford University)
Edgar Dobriban (University of Pennsylvania)
Mert Pilanci (Stanford University)
More from the Same Authors
- 2021 Spotlight: Newton-LESS: Sparsification without Trade-offs for the Sketched Newton Update
  Michal Derezinski · Jonathan Lacotte · Mert Pilanci · Michael Mahoney
- 2022 Poster: Fair Bayes-Optimal Classifiers Under Predictive Parity
  Xianli Zeng · Edgar Dobriban · Guang Cheng
- 2023 Poster: Langevin Quasi-Monte Carlo
  Sifan Liu
- 2022 Poster: PAC Prediction Sets for Meta-Learning
  Sangdon Park · Edgar Dobriban · Insup Lee · Osbert Bastani
- 2022 Poster: Collaborative Learning of Discrete Distributions under Heterogeneity and Communication Constraints
  Xinmeng Huang · Donghwan Lee · Edgar Dobriban · Hamed Hassani
- 2021 Poster: Newton-LESS: Sparsification without Trade-offs for the Sketched Newton Update
  Michal Derezinski · Jonathan Lacotte · Mert Pilanci · Michael Mahoney
- 2020 Poster: Debiasing Distributed Second Order Optimization with Surrogate Sketching and Scaled Regularization
  Michal Derezinski · Burak Bartan · Mert Pilanci · Michael Mahoney
- 2020 Poster: Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization
  Jonathan Lacotte · Mert Pilanci
- 2020 Oral: Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization
  Jonathan Lacotte · Mert Pilanci
- 2020 Poster: Implicit Regularization and Convergence for Weight Normalization
  Xiaoxia Wu · Edgar Dobriban · Tongzheng Ren · Shanshan Wu · Zhiyuan Li · Suriya Gunasekar · Rachel Ward · Qiang Liu
- 2020 Poster: A Group-Theoretic Framework for Data Augmentation
  Shuxiao Chen · Edgar Dobriban · Jane Lee
- 2020 Oral: A Group-Theoretic Framework for Data Augmentation
  Shuxiao Chen · Edgar Dobriban · Jane Lee
- 2019 Poster: High-Dimensional Optimization in Adaptive Random Subspaces
  Jonathan Lacotte · Mert Pilanci · Marco Pavone
- 2019 Poster: Asymptotics for Sketching in Least Squares Regression
  Edgar Dobriban · Sifan Liu