Understanding the training dynamics of deep learning models is perhaps a necessary step toward demystifying the effectiveness of these models. In particular, how do training data from different classes gradually become separable in their feature spaces when training neural networks using stochastic gradient descent? In this paper, we model the evolution of features during deep learning training using a set of stochastic differential equations (SDEs), each of which corresponds to a training sample. As a crucial ingredient in our modeling strategy, each SDE contains a drift term that reflects the impact of backpropagation at an input on the features of all samples. Our main finding uncovers a sharp phase transition phenomenon regarding the intra-class impact: if the SDEs are locally elastic in the sense that the impact is more significant on samples from the same class as the input, the features of the training data become linearly separable, meaning vanishing training loss; otherwise, the features are not separable, no matter how long the training time is. In the presence of local elasticity, moreover, an analysis of our SDEs shows that the features exhibit a simple geometric structure known as neural collapse. Taken together, our results shed light on the decisive role of local elasticity in the training dynamics of neural networks. We corroborate our theoretical analysis with experiments on a synthetic dataset of geometric shapes as well as on CIFAR-10.
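To give a rough sense of the phase transition described in the abstract, below is a minimal illustrative sketch (not the paper's exact model). It runs an Euler-Maruyama discretization of per-sample SDEs in which each feature is pulled toward a hypothetical target for its own class with intra-class strength `alpha` and toward the other class's target with inter-class strength `beta`; the parameters, targets, and noise level are all assumptions made for illustration. When `alpha > beta` (local elasticity), the two classes settle near distinct points and become separable, with small within-class spread reminiscent of neural collapse; when `alpha <= beta`, the classes remain mixed.

```python
# Toy sketch only: a simplified, hypothetical stand-in for the paper's locally
# elastic SDEs, intended to illustrate the separable vs. non-separable regimes.
import numpy as np

rng = np.random.default_rng(0)

def simulate(alpha, beta, n=100, dim=2, steps=3000, dt=0.01, sigma=0.1):
    """Evolve features of two classes under an Euler-Maruyama scheme.

    Each sample's drift pulls it toward its own class target with strength
    `alpha` (intra-class impact) and toward the other class target with
    strength `beta` (inter-class impact).  Local elasticity: alpha > beta.
    """
    mu = np.array([[+1.0, 0.0], [-1.0, 0.0]])     # hypothetical class targets
    labels = np.repeat([0, 1], n // 2)
    x = 0.1 * rng.normal(size=(n, dim))           # features start near the origin
    for _ in range(steps):
        own, other = mu[labels], mu[1 - labels]
        drift = alpha * (own - x) + beta * (other - x)
        x += drift * dt + sigma * np.sqrt(dt) * rng.normal(size=x.shape)
    return x, labels

def class_mean_gap(x, labels):
    """Distance between the two class means of the final features."""
    return np.linalg.norm(x[labels == 0].mean(0) - x[labels == 1].mean(0))

# alpha > beta: classes concentrate around distinct points -> linearly separable.
x_le, y = simulate(alpha=1.0, beta=0.2)
# alpha == beta: both classes collapse onto the same point -> not separable.
x_ne, _ = simulate(alpha=0.6, beta=0.6)
print("mean gap, locally elastic:", class_mean_gap(x_le, y))
print("mean gap, not elastic:   ", class_mean_gap(x_ne, y))
```

In this toy model the stationary point of a class-c feature is a weighted average of the two targets, (alpha * mu_c + beta * mu_{1-c}) / (alpha + beta), so the gap between class means shrinks to zero exactly when the intra-class strength no longer dominates, mirroring the phase transition stated in the abstract.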
Author Information
Jiayao Zhang (University of Pennsylvania)
Hua Wang (Wharton School, University of Pennsylvania)
Weijie Su (Computer and Information Science and Wharton, University of Pennsylvania)
More from the Same Authors
- 2021 Spotlight: A Central Limit Theorem for Differentially Private Query Answering
  Jinshuo Dong · Weijie Su · Linjun Zhang
- 2022: Towards Reverse Causal Inference on Panel Data: Precise Formulation and Challenges
  Jiayao Zhang · Youngsuk Park · Danielle Maddix · Dan Roth · Yuyang (Bernie) Wang
- 2022 Poster: The alignment property of SGD noise and how it helps select flat minima: A stability analysis
  Lei Wu · Mingze Wang · Weijie Su
- 2021 Poster: A Central Limit Theorem for Differentially Private Query Answering
  Jinshuo Dong · Weijie Su · Linjun Zhang
- 2021 Poster: You Are the Best Reviewer of Your Own Papers: An Owner-Assisted Scoring Mechanism
  Weijie Su
- 2020 Poster: Label-Aware Neural Tangent Kernel: Toward Better Generalization and Local Elasticity
  Shuxiao Chen · Hangfeng He · Weijie Su
- 2020 Spotlight: The Complete Lasso Tradeoff Diagram
  Hua Wang · Yachong Yang · Zhiqi Bu · Weijie Su
- 2019 Poster: Algorithmic Analysis and Statistical Estimation of SLOPE via Approximate Message Passing
  Zhiqi Bu · Jason Klusowski · Cynthia Rush · Weijie Su
- 2019 Poster: Acceleration via Symplectic Discretization of High-Resolution Differential Equations
  Bin Shi · Simon Du · Weijie Su · Michael Jordan