

Poster in Workshop: Has it Trained Yet? A Workshop for Algorithmic Efficiency in Practical Neural Network Training

IMPON: Efficient IMPortance sampling with ONline regression for rapid neural network training

Vignesh Ganapathiraman · Francisco Calderon · Anila Joshi


Abstract:

Modern-day deep learning models are trained efficiently at scale thanks to the widespread use of stochastic optimizers such as SGD and ADAM. These optimizers update the model weights iteratively based on a batch of uniformly sampled training data at each iteration. However, it has been previously observed that the training performance and overall generalization ability of the model can be significantly improved by selectively sampling training data based on an importance criterion, a technique known as importance sampling. Previous approaches to importance sampling use metrics such as the loss or gradient norm to calculate the importance scores. These methods either attempt to compute these metrics directly, which increases training time, or approximate them with an analytical proxy, which typically yields inferior training performance. In this work, we propose a new sampling strategy called IMPON, which computes importance scores based on an auxiliary linear model that regresses the loss of the original deep model, given the current training context, at minimal additional computational cost. Experimental results show that IMPON achieves significantly higher test accuracy, much faster than prior approaches.
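The abstract does not specify IMPON's exact features, regression update, or sampling rule, so the following is only a minimal sketch of the general idea it describes: an online linear model (here, recursive least squares) is fit to the deep model's observed per-example losses from cheap per-example features, and batches are then sampled in proportion to the predicted loss. All names (OnlineLossRegressor, importance_sample, model_losses) and the choice of features are illustrative assumptions, not the paper's method.

import numpy as np

class OnlineLossRegressor:
    """Online linear regression (recursive least squares) from cheap
    per-example features to the deep model's observed loss. Illustrative
    stand-in for IMPON's auxiliary model; the abstract does not specify
    the exact features or update rule."""

    def __init__(self, dim, lam=1.0):
        self.w = np.zeros(dim)        # regression weights
        self.P = np.eye(dim) / lam    # inverse feature covariance (RLS state)

    def update(self, x, loss):
        # One recursive-least-squares step on a (features, loss) pair.
        Px = self.P @ x
        k = Px / (1.0 + x @ Px)       # gain vector
        self.w += k * (loss - x @ self.w)
        self.P -= np.outer(k, Px)

    def predict(self, X):
        return X @ self.w             # predicted per-example loss


def importance_sample(scores, batch_size, rng, eps=1e-8):
    # Sample a batch with probability proportional to predicted loss;
    # also return each sampled probability, needed to reweight gradients
    # by 1 / (N * p_i) so the stochastic gradient stays unbiased.
    p = np.clip(scores, eps, None)
    p = p / p.sum()
    idx = rng.choice(len(scores), size=batch_size, replace=False, p=p)
    return idx, p[idx]


# Hypothetical usage in a training loop, with a toy stand-in for the
# deep model's forward pass:
rng = np.random.default_rng(0)
N, B = 10_000, 128
hardness = rng.random(N)                   # toy per-example difficulty

def model_losses(idx, step):
    # Simulated per-example losses that decay as training progresses.
    return hardness[idx] * np.exp(-step / 50.0) + 0.01 * rng.random(len(idx))

reg = OnlineLossRegressor(dim=2)
feats = np.ones((N, 2))                    # assumed context: [bias, last seen loss]

for step in range(200):
    idx, p = importance_sample(reg.predict(feats), B, rng)
    losses = model_losses(idx, step)       # would come from the real model
    feats[idx, 1] = losses                 # refresh the "last seen loss" feature
    for x, l in zip(feats[idx], losses):
        reg.update(x, l)
    # backward pass with per-example weights 1 / (N * p) would go here

Sampling in proportion to predicted loss concentrates updates on examples the auxiliary model expects to be hard, while the 1/(N*p) reweighting keeps the gradient estimate unbiased. With the untrained regressor (zero weights) the scores are uniform, so the sketch degenerates gracefully to standard uniform sampling, and each regression update costs only a few O(dim^2) operations per example, in line with the paper's stated goal of minimal additional computational cost.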
