

Poster

Why neural networks find simple solutions: The many regularizers of geometric complexity

Benoit Dherin · Michael Munn · Mihaela Rosca · David Barrett

Hall J (level 1) #432

Keywords: [ Regularization ] [ Deep Learning Theory ] [ Neural Networks ] [ Complexity ] [ Double-Descent ] [ Smoothness ] [ Implicit Regularization ] [ Theory ] [ Deep Learning ]


Abstract:

In many contexts, simpler models are preferable to more complex ones, and controlling model complexity is the goal of many methods in machine learning, such as regularization, hyperparameter tuning, and architecture design. In deep learning, it has been difficult to understand the underlying mechanisms of complexity control, since many traditional complexity measures are not naturally suited to deep neural networks. Here we develop the notion of geometric complexity, a measure of the variability of the model function computed using a discrete Dirichlet energy. Using a combination of theoretical arguments and empirical results, we show that many common training heuristics, such as parameter norm regularization, spectral norm regularization, flatness regularization, implicit gradient regularization, noise regularization, and the choice of parameter initialization, all act to control geometric complexity, providing a unifying framework in which to characterize the behavior of deep learning models.
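As a rough illustration of the quantity described in the abstract, the sketch below computes a discrete Dirichlet energy of a model function over a batch: the mean squared Frobenius norm of the network's input-output Jacobian. This is a minimal assumption-laden example, not the authors' implementation; the toy network, parameter shapes, and data are placeholders.

```python
# Minimal sketch (not the paper's code): geometric complexity as a discrete
# Dirichlet energy, i.e. the mean squared Frobenius norm of the model's
# input-output Jacobian over a sample of data points.
import jax
import jax.numpy as jnp

def mlp(params, x):
    # Toy two-layer network; any differentiable model function would do.
    w1, b1, w2, b2 = params
    h = jnp.tanh(x @ w1 + b1)
    return h @ w2 + b2

def geometric_complexity(params, batch):
    # Jacobian of the model output w.r.t. the input, taken per example.
    jac_fn = jax.jacobian(lambda x: mlp(params, x))
    def per_example(x):
        J = jac_fn(x)           # shape: (out_dim, in_dim)
        return jnp.sum(J ** 2)  # squared Frobenius norm
    return jnp.mean(jax.vmap(per_example)(batch))

# Example usage with random parameters and inputs (hypothetical shapes).
key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = (jax.random.normal(k1, (8, 16)), jnp.zeros(16),
          jax.random.normal(k2, (16, 4)), jnp.zeros(4))
batch = jax.random.normal(k3, (32, 8))
print(geometric_complexity(params, batch))
```

In this form the quantity can be logged during training or added as an explicit penalty, which is one way to probe the claim that common training heuristics implicitly keep it small.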
