
Workshop: OPT 2023: Optimization for Machine Learning

Non-Uniform Sampling and Adaptive Optimizers in Deep Learning

Thibault Lahire


Stochastic gradient descent samples the training set uniformly to build an unbiased gradient estimate from a limited number of samples. However, at a given step of training, some samples are more helpful than others for continued learning. Importance sampling for training deep neural networks has been widely studied in order to propose sampling schemes that outperform uniform sampling. After recalling the theory of importance sampling for deep learning, this paper focuses on the interplay between the sampling scheme and the optimizer used. We show that sampling proportional to the per-sample gradient norms, while optimal for stochastic gradient descent in its standard form, is not optimal for adaptive optimizers. This implies that new sampling schemes must be designed with respect to the optimizer used, and that applying approximations of the per-sample gradient norm scheme with adaptive optimizers is likely to yield unsatisfying results.
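To make the importance-sampling setup concrete, here is a minimal NumPy sketch (a hypothetical toy least-squares problem, not the paper's experiments) of sampling proportional to per-sample gradient norms. The key point is the reweighting by 1/(N·p_i), which keeps the minibatch gradient estimate unbiased for any sampling distribution p.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: per-sample loss l_i(w) = 0.5 * (x_i @ w - y_i)**2
N, d = 256, 5
X = rng.normal(size=(N, d))
y = rng.normal(size=N)
w = rng.normal(size=d)

# Per-sample gradients: g_i = (x_i @ w - y_i) * x_i, shape (N, d)
residuals = X @ w - y
per_sample_grads = residuals[:, None] * X
norms = np.linalg.norm(per_sample_grads, axis=1)

# Importance-sampling distribution proportional to per-sample gradient norms
p = norms / norms.sum()

def is_gradient_estimate(batch_size):
    """Minibatch gradient via importance sampling.

    Each sampled index i is weighted by 1 / (N * p_i), so the estimator's
    expectation equals the full-batch mean gradient for any p > 0.
    """
    idx = rng.choice(N, size=batch_size, p=p)
    weights = 1.0 / (N * p[idx])
    return (weights[:, None] * per_sample_grads[idx]).mean(axis=0)

full_grad = per_sample_grads.mean(axis=0)
est = is_gradient_estimate(batch_size=32)
```

Unbiasedness can be checked analytically: summing p_i · g_i / (N·p_i) over all i recovers the full-batch mean gradient exactly. The paper's observation is that this particular choice of p, which minimizes the variance of the plain SGD update, is no longer the right target once the gradient is preconditioned by an adaptive optimizer such as Adam.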
