Poster
A Generalized Neural Tangent Kernel Analysis for Two-layer Neural Networks
Zixiang Chen · Yuan Cao · Quanquan Gu · Tong Zhang

Thu Dec 10 09:00 PM -- 11:00 PM (PST) @ Poster Session 6 #1877

A recent breakthrough in deep learning theory shows that the training of over-parameterized deep neural networks can be characterized by a kernel function called the neural tangent kernel (NTK). However, it is known that this type of result does not perfectly match practice, as NTK-based analysis requires the network weights to stay very close to their initialization throughout training, and cannot handle regularizers or gradient noise. In this paper, we provide a generalized neural tangent kernel analysis and show that noisy gradient descent with weight decay can still exhibit a "kernel-like" behavior. This implies that the training loss converges linearly up to a certain accuracy. We also establish a novel generalization error bound for two-layer neural networks trained by noisy gradient descent with weight decay.
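For intuition, the following is a minimal sketch (not the authors' code) of the kind of training procedure the abstract refers to: gradient descent on a regularized squared loss for a two-layer ReLU network, with weight decay and Gaussian noise injected into the gradient. The network form, noise model, and all hyperparameters (width m, step size eta, weight-decay coefficient lam, noise scale sigma) are illustrative assumptions, not the exact setting analyzed in the paper.

```python
import numpy as np

def two_layer_net(W, a, X):
    # f(x) = (1/sqrt(m)) * a^T relu(W x); second-layer weights a are fixed
    m = W.shape[0]
    return np.maximum(X @ W.T, 0.0) @ a / np.sqrt(m)

def noisy_gd_step(W, a, X, y, eta, lam, sigma, rng):
    # One step of noisy gradient descent with weight decay (illustrative only).
    m = W.shape[0]
    pred = two_layer_net(W, a, X)
    err = (pred - y) / len(y)                          # d(loss)/d(pred) for mean squared error
    act = (X @ W.T > 0).astype(float)                  # ReLU derivative at each hidden unit
    grad = (act * np.outer(err, a / np.sqrt(m))).T @ X  # gradient of the loss w.r.t. W
    grad += lam * W                                     # weight decay (L2 regularization)
    noise = sigma * rng.standard_normal(W.shape)        # injected Gaussian gradient noise
    return W - eta * (grad + noise)

# Toy usage: random data, random initialization, a few hundred noisy GD steps.
rng = np.random.default_rng(0)
n, d, m = 100, 10, 512
X = rng.standard_normal((n, d))
y = np.sign(X[:, 0])
W = rng.standard_normal((m, d))
a = rng.choice([-1.0, 1.0], size=m)
for t in range(200):
    W = noisy_gd_step(W, a, X, y, eta=0.5, lam=1e-4, sigma=1e-3, rng=rng)
```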

Author Information

Zixiang Chen (UCLA)
Yuan Cao (UCLA)
Quanquan Gu (UCLA)
Tong Zhang (The Hong Kong University of Science and Technology)
