Poster

Learning with Incremental Iterative Regularization

Lorenzo Rosasco · Silvia Villa

Abstract:

Within a statistical learning setting, we propose and study an iterative regularization algorithm for least squares defined by an incremental gradient method. In particular, we show that, if all other parameters are fixed a priori, the number of passes over the data (epochs) acts as a regularization parameter. We prove strong universal consistency, i.e., almost sure convergence of the risk, as well as sharp finite sample bounds for the iterates. Our results are a step towards understanding the effect of multiple epochs in stochastic gradient techniques in machine learning and rely on integrating statistical and optimization results.
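
To make the setting concrete, here is a minimal sketch of an incremental gradient method for least squares in which the number of epochs plays the role of the regularization parameter, as the abstract describes. The step size `gamma`, the cyclic ordering of the examples, and the function name are illustrative assumptions, not the paper's exact prescription.

```python
import numpy as np

def incremental_least_squares(X, y, gamma=0.01, epochs=10):
    """Incremental gradient descent for least squares.

    Each epoch is one pass over the data; stopping after a small
    number of epochs (early stopping) acts as regularization.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):            # each epoch = one pass over the data
        for i in range(n):             # update on one example at a time
            residual = X[i] @ w - y[i]
            w -= gamma * residual * X[i]   # gradient step on a single example
    return w

# Toy usage: fewer epochs give a more regularized estimate.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.standard_normal(100)
w_hat = incremental_least_squares(X, y, gamma=0.01, epochs=5)
```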
