Contributed Talks 2: Data Generation without Function Estimation & Flat Minima and Generalization: Insights from Stochastic Convex Optimization
Data Generation without Function Estimation, Speaker: Hadi Daneshmand
Abstract: Estimating the score function, or other functions that depend on the population density, is a fundamental component of most generative models. However, such function estimation is computationally and statistically challenging. Can we avoid function estimation for data generation? We propose an estimation-free generative method: a set of points whose locations are deterministically updated with (inverse) gradient descent can transport a uniform distribution to an arbitrary data distribution in the mean-field regime, without function estimation, neural network training, or even noise injection. The method builds on recent advances in the mathematical physics of interacting particles, which we leverage to prove that it samples from the true underlying data distribution in the asymptotic regime, without any structural assumptions on that distribution.
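To make the idea of estimation-free particle transport concrete, here is a minimal sketch in one dimension. It is not the speaker's method: it implements a deterministic MMD gradient flow with a Gaussian kernel, which shares the key property described in the abstract, namely that particles initialized uniformly are moved by plain gradient descent on an interaction energy, using only kernel evaluations against data samples, with no density or score estimation, no neural network, and no injected noise. All function names and hyperparameters (`h`, `lr`, `steps`) are illustrative assumptions.

```python
import math

def k(x, y, h=1.0):
    """Gaussian kernel between two scalars."""
    return math.exp(-(x - y) ** 2 / (2 * h * h))

def dk_dx(x, y, h=1.0):
    """Derivative of the kernel with respect to its first argument."""
    return -(x - y) / (h * h) * k(x, y, h)

def mmd_grad(particles, data, h=1.0):
    """Gradient of the (biased) squared MMD energy with respect to each particle.

    The first term makes particles repel each other; the second pulls them
    toward the data samples. Only kernel evaluations are needed: nothing
    about the data density is ever estimated.
    """
    n, m = len(particles), len(data)
    grads = []
    for xi in particles:
        g = (2.0 / (n * n)) * sum(dk_dx(xi, xj, h) for xj in particles)
        g -= (2.0 / (n * m)) * sum(dk_dx(xi, yj, h) for yj in data)
        grads.append(g)
    return grads

def transport(particles, data, steps=300, lr=0.2, h=1.0):
    """Deterministically move particles by gradient descent on the MMD energy."""
    for _ in range(steps):
        g = mmd_grad(particles, data, h)
        particles = [x - lr * gx for x, gx in zip(particles, g)]
    return particles
```

For example, particles spread uniformly over [0, 1] and data samples clustered around 2.0 yield, after a few hundred deterministic steps, a particle cloud centered near the data.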
Flat Minima and Generalization: Insights from Stochastic Convex Optimization, Speaker: Shira Vansover-Hager
Abstract: Understanding the generalization behavior of learning algorithms is a central goal of learning theory. A recently emerging explanation is that learning algorithms succeed in practice because they converge to flat minima, which have been consistently associated with improved generalization. In this work, we study the link between flat minima and generalization in the canonical setting of stochastic convex optimization with a non-negative, \beta-smooth objective. Our first finding is that, even in this fundamental and well-studied setting, flat empirical minima may incur trivial \Omega(1) population risk while sharp minima generalize optimally. We then analyze two natural first-order methods, originally proposed by Foret et al. (2021), designed to steer convergence toward flat minima. For Sharpness-Aware Gradient Descent (SA-GD), which performs gradient steps on the maximal loss in a predefined neighborhood, we prove that while it converges to a flat minimum at a fast rate, the population risk of its solution can still be as large as \Omega(1). For Sharpness-Aware Minimization (SAM), a computationally efficient approximation of SA-GD based on normalized ascent steps, we show that although it minimizes the empirical loss, it may converge to a sharp minimum and likewise incur \Omega(1) population risk. Finally, we establish population risk upper bounds for both SA-GD and SAM using algorithmic-stability techniques.
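For concreteness, here is a minimal one-dimensional sketch of the SAM update rule of Foret et al. (2021) that the abstract refers to: a normalized ascent step of radius \rho, followed by a descent step using the gradient evaluated at the perturbed point. The one-dimensional loss and the hyperparameter values are illustrative assumptions, not details from the talk.

```python
def sam_step(w, grad, lr=0.1, rho=0.05):
    """One SAM step: normalized ascent to w + eps, then descend with grad(w + eps)."""
    g = grad(w)
    gnorm = abs(g)
    if gnorm < 1e-12:          # already (numerically) stationary
        return w
    eps = rho * g / gnorm      # normalized ascent step of radius rho
    return w - lr * grad(w + eps)

def minimize_sam(w, grad, steps=100, lr=0.1, rho=0.05):
    """Run SAM for a fixed number of steps from initial point w."""
    for _ in range(steps):
        w = sam_step(w, grad, lr, rho)
    return w
```

SA-GD differs in that it takes a gradient step on the exact maximum of the loss over the \rho-neighborhood rather than this one-step normalized approximation; for a convex one-dimensional loss that maximum sits at a boundary point of the interval [w - \rho, w + \rho].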