Spotlight
Safe Adaptive Importance Sampling
Sebastian Stich · Anant Raj · Martin Jaggi

Wed Dec 06 03:35 PM -- 03:40 PM (PST) @ Hall C

Importance sampling has become an indispensable strategy to speed up optimization algorithms for large-scale applications. Improved adaptive variants -- using importance values defined by the complete gradient information, which changes during optimization -- enjoy favorable theoretical properties, but are typically computationally infeasible. In this paper we propose an efficient approximation of gradient-based sampling, which is based on safe bounds on the gradient. The proposed sampling distribution is (i) provably the *best sampling* with respect to the given bounds, (ii) always better than uniform sampling and fixed importance sampling, and (iii) can be computed efficiently -- in many applications at negligible extra cost. The proposed sampling scheme is generic and can easily be integrated into existing algorithms. In particular, we show that coordinate descent (CD) and stochastic gradient descent (SGD) can enjoy a significant speed-up under the novel scheme. The proven efficiency of the proposed sampling is verified by extensive numerical testing.
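To make the idea concrete, below is a minimal sketch of importance-sampling SGD for a least-squares problem, where the sampling distribution is derived from per-example upper bounds on the gradient norms rather than the exact (expensive) gradients. This is an illustrative simplification, not the paper's exact safe-bound algorithm; the problem setup, the bound `safe_upper_bounds`, the radius `R`, and the step size are all assumptions made for the example.

```python
# Sketch: SGD with importance sampling from per-example gradient-norm bounds.
# Illustrative only; the paper's algorithm computes a provably optimal
# distribution given lower and upper bounds, which is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least squares: f(x) = (1/2n) * sum_i (a_i^T x - b_i)^2,
# with deliberately uneven row norms so importance sampling matters.
n, d = 1000, 10
A = rng.standard_normal((n, d)) * rng.uniform(0.1, 2.0, size=(n, 1))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

def grad_i(x, i):
    """Gradient of the i-th summand (a_i^T x - b_i)^2 / 2."""
    return A[i] * (A[i] @ x - b[i])

def safe_upper_bounds(R):
    """Upper bounds u_i >= ||grad_i(x)|| for all x with ||x|| <= R
    (the radius R is assumed, not enforced, in this sketch):
    ||a_i (a_i^T x - b_i)|| <= ||a_i|| * (||a_i|| * R + |b_i|)."""
    row_norms = np.linalg.norm(A, axis=1)
    return row_norms * (row_norms * R + np.abs(b))

def importance_sgd(steps=5000, lr=1e-3, R=5.0):
    x = np.zeros(d)
    u = safe_upper_bounds(R)        # in an adaptive scheme these would be refreshed
    p = u / u.sum()                 # sample proportional to the safe bounds
    for _ in range(steps):
        i = rng.choice(n, p=p)
        g = grad_i(x, i) / (n * p[i])   # reweighting keeps the estimate unbiased
        x -= lr * g
    return x

x_hat = importance_sgd()
print("distance to x_true:", np.linalg.norm(x_hat - x_true))
```

The reweighting factor `1/(n * p[i])` makes the stochastic gradient unbiased for any sampling distribution; choosing `p` proportional to valid upper bounds caps the magnitude of each reweighted gradient by the average bound, which is the intuition behind sampling from bounds instead of exact gradient norms.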

Author Information

Sebastian Stich (EPFL)

Dr. [Sebastian U. Stich](https://sstich.ch/) is a postdoctoral researcher in machine learning at EPFL (Lausanne, Switzerland). Research interests:
- *methods for machine learning and statistics*, at the interface of theory and practice
- *collaborative learning* (distributed, federated and decentralized methods)
- *optimization for machine learning* (adaptive stochastic methods and generalization performance)

Anant Raj (Max Planck Institute for Intelligent Systems)
Martin Jaggi (EPFL)
