

Tutorial

Reconsidering Overfitting in the Age of Overparameterized Models

Spencer Frei · Vidya Muthukumar · Fanny Yang · Arash Amini · Kamalika Chaudhuri · Daniel Hsu · Nati Srebro · Chiyuan Zhang

Hall D2 (level 1)
[ Project Page ] [ Slides ]
Mon 11 Dec 11:45 a.m. PST — 2:15 p.m. PST

Abstract:

Large, overparameterized models such as neural networks are now the workhorses of modern machine learning. These models are often trained to near-zero error on noisy datasets and yet generalize well to unseen data, in contrast to the textbook intuition about the perils of overfitting. At the same time, near-perfect data fitting can raise serious issues for robustness, privacy, and fairness. Because of overparameterization, classical theoretical frameworks provide little guidance for navigating these questions. It is thus crucial to develop new intuition about overfitting and generalization that reflects these empirical observations. In this tutorial, we discuss recent work in the learning theory literature that provides theoretical insights into these phenomena.
