Poster
Generalized Leverage Score Sampling for Neural Networks
Jason Lee · Ruoqi Shen · Zhao Song · Mengdi Wang · Zheng Yu

Thu Dec 10 09:00 PM -- 11:00 PM (PST) @ Poster Session 6 #1867

Leverage score sampling is a powerful technique originating in theoretical computer science that can be used to speed up a large number of fundamental problems, e.g., linear regression, linear programming, semi-definite programming, the cutting plane method, graph sparsification, maximum matching, and max-flow. Recently, it has been shown that leverage score sampling helps accelerate kernel methods [Avron, Kapralov, Musco, Musco, Velingker and Zandieh 17]. In this work, we generalize the results of [Avron, Kapralov, Musco, Musco, Velingker and Zandieh 17] to a broader class of kernels. We further bring leverage score sampling into the field of deep learning theory. 1. We show a connection between the initialization of neural network training and approximating the neural tangent kernel with random features. 2. We prove the equivalence between regularized neural networks and neural tangent kernel ridge regression under both classical random Gaussian initialization and leverage score sampling initialization.
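The abstract treats leverage score sampling as a known primitive, so as a concrete illustration (not the paper's method), the sketch below shows classical leverage score sampling applied to linear regression, the first problem listed above. Rows are kept with probability proportional to their leverage scores and rescaled so the sketched problem is an unbiased surrogate for the full one. The matrix sizes, the sample count `m=500`, and the exact-SVD score computation are illustrative assumptions; practical algorithms approximate the scores instead of computing a full SVD.

```python
import numpy as np

def leverage_scores(A):
    """Leverage scores of the rows of A: squared row norms of the
    left singular vectors, tau_i = ||U_i||^2 (they sum to rank(A))."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return np.sum(U**2, axis=1)

def sample_and_solve(A, b, m, rng):
    """Sketch min_x ||Ax - b|| by sampling m rows with probability
    proportional to their leverage scores, rescaling each sampled row
    by 1/sqrt(m * p_i) so the sketched normal equations are unbiased."""
    tau = leverage_scores(A)
    p = tau / tau.sum()
    idx = rng.choice(len(b), size=m, replace=True, p=p)
    scale = 1.0 / np.sqrt(m * p[idx])
    SA = A[idx] * scale[:, None]
    Sb = b[idx] * scale
    x, *_ = np.linalg.lstsq(SA, Sb, rcond=None)
    return x

rng = np.random.default_rng(0)
n, d = 20000, 10
A = rng.standard_normal((n, d))
A[:50] *= 30.0                      # a few high-leverage (influential) rows
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
x_sketch = sample_and_solve(A, b, m=500, rng=rng)
print(np.linalg.norm(x_full - x_sketch))  # small: the sketch preserves the solution
```

Because sampling is proportional to leverage, the few influential rows are kept with high probability, which is why a 500-row sketch of a 20,000-row problem recovers nearly the same solution; uniform sampling would likely miss them.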

Author Information

Jason Lee (Princeton University)
Ruoqi Shen (University of Washington)
Zhao Song (IAS/Princeton)
Mengdi Wang (Princeton University)

Mengdi Wang is interested in data-driven stochastic optimization and applications in machine learning and reinforcement learning. She received her PhD in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 2013. At MIT, Mengdi was affiliated with the Laboratory for Information and Decision Systems and was advised by Dimitri P. Bertsekas. Mengdi became an assistant professor at Princeton in 2014. She received the Young Researcher Prize in Continuous Optimization of the Mathematical Optimization Society in 2016 (awarded once every three years).

Zheng Yu (Princeton University)