Poster

Estimating Generalization Performance Along the Trajectory of Proximal SGD in Robust Regression

Kai Tan · Pierre C Bellec


Abstract:

This paper studies the generalization performance of iterates obtained by Gradient Descent (GD), Stochastic Gradient Descent (SGD), and their proximal variants in high-dimensional robust regression problems, where the number of features is comparable to the sample size and the errors may be heavy-tailed. We introduce estimators that precisely track the generalization error of the iterates along the trajectory of the iterative algorithm. These estimators are consistent under suitable conditions. The results are illustrated through several examples, including Huber regression, pseudo-Huber regression, and their penalized variants with non-smooth regularizers. We provide explicit generalization error estimates for iterates generated by GD and SGD, and by proximal SGD in the presence of a non-smooth regularizer. The proposed risk estimates serve as effective proxies for the actual generalization error, allowing us to determine the optimal stopping iteration that minimizes the generalization error. Extensive simulations confirm the effectiveness of the proposed generalization error estimates.
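To make the setting concrete, below is a minimal Python sketch of proximal SGD for l1-penalized Huber regression in the high-dimensional simulation regime the abstract describes (p comparable to n, heavy-tailed errors). It tracks the oracle generalization error ||b_t - beta*||^2, which is available only in simulation; the paper's contribution is a data-driven estimate of this trajectory, whose construction is not reproduced here. The penalty level `lam`, step size `eta`, batch size, and Huber threshold `delta` are illustrative choices, not values from the paper.

```python
import numpy as np

def huber_grad(r, delta=1.345):
    # Derivative of the Huber loss with respect to the residual:
    # identity on [-delta, delta], clipped to +/- delta outside.
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def soft_threshold(b, t):
    # Proximal operator of the l1 penalty t * ||b||_1.
    return np.sign(b) * np.maximum(np.abs(b) - t, 0.0)

rng = np.random.default_rng(0)
n, p = 500, 400                       # high-dimensional regime: p comparable to n
X = rng.standard_normal((n, p))
beta_star = np.zeros(p)
beta_star[:10] = 1.0                  # sparse signal
y = X @ beta_star + rng.standard_t(df=2, size=n)   # heavy-tailed errors

lam, eta, batch = 0.1, 0.5, 32        # hypothetical penalty, step size, batch size
b = np.zeros(p)
risks = []
for t in range(200):
    idx = rng.choice(n, size=batch, replace=False)
    r = y[idx] - X[idx] @ b
    grad = -X[idx].T @ huber_grad(r) / batch
    # Proximal SGD update: gradient step on the Huber loss,
    # then the prox of the (non-smooth) l1 regularizer.
    b = soft_threshold(b - eta * grad, eta * lam)
    # Oracle generalization error for isotropic Gaussian design:
    # E[(x^T (b_t - beta*))^2] = ||b_t - beta*||^2. Known only in simulation;
    # the paper estimates this quantity from the data alone.
    risks.append(np.sum((b - beta_star) ** 2))

print("optimal stopping iteration (oracle):", int(np.argmin(risks)))
```

In practice the oracle curve `risks` is unobservable; the paper's estimators stand in for it, so that the argmin over iterations can be computed from the data and used for early stopping.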
