The multiplicative structure of the parameters and the input data in the first layer of neural networks is explored to build a connection between the landscape of the loss function with respect to the parameters and the landscape of the model function with respect to the input data. Through this connection, it is shown that flat minima regularize the gradient of the model function, which explains the good generalization performance of flat minima. We then go beyond flatness and consider higher-order moments of the gradient noise, and show, via a linear stability analysis of SGD around global minima, that Stochastic Gradient Descent (SGD) tends to impose constraints on these moments. Together with the multiplicative structure, this identifies the Sobolev regularization effect of SGD, i.e., SGD regularizes the Sobolev seminorms of the model function with respect to the input data. Finally, bounds on the generalization error and adversarial robustness are provided for solutions found by SGD under assumptions on the data distribution.
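The multiplicative structure can be made concrete for a small example. The sketch below is not the authors' code; it assumes, for illustration only, a two-layer tanh network f(x) = a^T tanh(W x) with hypothetical variable names (W, a, x, g). It shows that the gradient of f with respect to the first-layer weights W and the gradient with respect to the input x share the same factor g, so controlling the parameter gradient (flatness of the loss landscape) also controls the input gradient, up to the scales of x and W.

# Minimal numerical sketch (illustrative assumption: f(x) = a^T tanh(W x)).
import numpy as np

rng = np.random.default_rng(0)
d, m = 5, 8                      # input and hidden dimensions
W = rng.standard_normal((m, d))  # first-layer weights
a = rng.standard_normal(m)       # output-layer weights
x = rng.standard_normal(d)       # a single input

z = W @ x                        # first-layer pre-activation
g = a * (1.0 - np.tanh(z) ** 2)  # df/dz, with tanh'(z) = 1 - tanh(z)^2

grad_W = np.outer(g, x)          # df/dW = g x^T : rank-one, multiplicative in x
grad_x = W.T @ g                 # df/dx = W^T g : multiplicative in W

# ||df/dW||_F = ||g|| * ||x||, so a small parameter gradient forces g to be small,
# which in turn bounds the input gradient ||df/dx|| <= ||W||_2 * ||g||.
print(np.linalg.norm(grad_W), np.linalg.norm(g) * np.linalg.norm(x))          # equal
print(np.linalg.norm(grad_x) <= np.linalg.norm(W, 2) * np.linalg.norm(g))     # True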
Author Information
Chao Ma (Stanford University)
Lexing Ying (Stanford University)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: On Linear Stability of SGD and Input-Smoothness of Neural Networks »
More from the Same Authors
- 2022 : Minimax Optimal Kernel Operator Learning via Multilevel Training »
  Jikai Jin · Yiping Lu · Jose Blanchet · Lexing Ying
- 2022 : Synthetic Principle Component Design: Fast Covariate Balancing with Synthetic Controls »
  Yiping Lu · Jiajin Li · Lexing Ying · Jose Blanchet
- 2023 Poster: When can Regression-Adjusted Control Variate Help? Rare Events, Sobolev Embedding and Minimax Optimality »
  Jose Blanchet · Haoxuan Chen · Yiping Lu · Lexing Ying
- 2022 Poster: Sobolev Acceleration and Statistical Optimality for Learning Elliptic Equations via Gradient Descent »
  Yiping Lu · Jose Blanchet · Lexing Ying
- 2021 : Statistical Numerical PDE : Fast Rate, Neural Scaling Law and When it’s Optimal »
  Yiping Lu · Haoxuan Chen · Jianfeng Lu · Lexing Ying · Jose Blanchet
- 2019 Poster: Global Convergence of Gradient Descent for Deep Linear Residual Networks »
  Lei Wu · Qingcan Wang · Chao Ma
- 2018 Poster: How SGD Selects the Global Minima in Over-parameterized Learning: A Dynamical Stability Perspective »
  Lei Wu · Chao Ma · Weinan E