Newton-LESS: Sparsification without Trade-offs for the Sketched Newton Update
Michal Derezinski · Jonathan Lacotte · Mert Pilanci · Michael W Mahoney

Thu Dec 09 04:30 PM -- 06:00 PM (PST) @ Virtual

In second-order optimization, a potential bottleneck can be computing the Hessian matrix of the optimized function at every iteration. Randomized sketching has emerged as a powerful technique for constructing estimates of the Hessian which can be used to perform approximate Newton steps. This involves multiplication by a random sketching matrix, which introduces a trade-off between the computational cost of sketching and the convergence rate of the optimization. A theoretically desirable but practically much too expensive choice is to use a dense Gaussian sketching matrix, which produces unbiased estimates of the exact Newton step and offers strong problem-independent convergence guarantees. We show that the Gaussian matrix can be drastically sparsified, substantially reducing the computational cost, without affecting its convergence properties in any way. This approach, called Newton-LESS, is based on a recently introduced sketching technique: LEverage Score Sparsified (LESS) embeddings. We prove that Newton-LESS enjoys nearly the same problem-independent local convergence rate as Gaussian embeddings for a large class of functions. In particular, this leads to a new state-of-the-art convergence result for an iterative least squares solver. Finally, we substantially extend LESS embeddings to include uniformly sparsified random sign matrices which can be implemented efficiently and perform well in numerical experiments.
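To make the idea concrete, here is a minimal NumPy sketch of a sketched-Newton least-squares solver. It uses a simplified uniformly sparsified random sign matrix in place of the full LESS construction (which additionally weights row sparsity by leverage scores); the function names, parameters, and the toy problem are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sparse_sign_sketch(m, n, s, rng):
    # Simplified uniformly sparsified sign sketch (a stand-in for LESS
    # embeddings): each entry is nonzero with probability s/n and takes
    # values +/- sqrt(n / (s*m)), so that E[S^T S] = I_n.
    # (A real implementation would use a sparse matrix format.)
    mask = rng.random((m, n)) < s / n
    signs = rng.choice([-1.0, 1.0], size=(m, n))
    return mask * signs * np.sqrt(n / (s * m))

def sketched_newton_lsq(A, b, m, s, iters=30, seed=0):
    # Iterative least-squares solver for min_x ||Ax - b||^2:
    # each step keeps the exact gradient A^T (Ax - b) but replaces the
    # exact Hessian A^T A with the sketched estimate (SA)^T (SA),
    # then takes an approximate Newton step.
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        S = sparse_sign_sketch(m, n, s, rng)
        SA = S @ A                        # sketching cost scales with s, not n
        g = A.T @ (A @ x - b)             # exact gradient
        H_s = SA.T @ SA                   # sketched Hessian
        x = x - np.linalg.solve(H_s, g)   # approximate Newton step
    return x
```

Because the sketched Hessian is an (approximately unbiased) estimate of the exact one, each step contracts the error toward the least-squares solution, and a larger sketch size `m` gives a faster problem-independent rate.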

Author Information

Michal Derezinski (University of Michigan)
Jonathan Lacotte (Stanford University)
Mert Pilanci (Stanford University)
Michael W Mahoney (UC Berkeley)
