Poster

Accelerating Stochastic Composition Optimization

Mengdi Wang · Ji Liu · Ethan Fang

Area 5+6+7+8 #167

Keywords: [ Reinforcement Learning Algorithms ] [ Convex Optimization ]


Abstract:

Consider the stochastic composition optimization problem, in which the objective is a composition of two expected-value functions. We propose a new stochastic first-order method, the accelerated stochastic compositional proximal gradient (ASC-PG) method, which updates on two different timescales based on queries to a sampling oracle. ASC-PG is the first proximal-gradient method for the stochastic composition problem that can handle a nonsmooth regularization penalty. We show that ASC-PG converges faster than the best known algorithms and that it achieves the optimal sample-error complexity in several important special cases. We further demonstrate the application of ASC-PG to reinforcement learning and conduct numerical experiments.
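To make the two-timescale structure concrete, below is a minimal sketch of a compositional proximal-gradient loop in the spirit the abstract describes: an auxiliary vector tracks the inner expectation on a slow timescale while the main iterate takes proximal gradient steps on a fast timescale. The function names (asc_pg_sketch, sample_g, sample_grad_g, sample_grad_f), the L1 penalty, and the step-size schedules are illustrative assumptions, not the paper's specification, and the extrapolation step of the actual accelerated method is omitted here.

import numpy as np

def prox_l1(z, t):
    # Soft-thresholding: proximal operator of t * ||.||_1, standing in
    # for the nonsmooth regularization penalty mentioned in the abstract.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def asc_pg_sketch(x0, sample_g, sample_grad_g, sample_grad_f,
                  n_iters=2000, alpha0=1.0, beta0=1.0, reg=1e-2):
    # Minimize E_v[ f_v( E_w[ g_w(x) ] ) ] + h(x) with h = reg * ||.||_1.
    # sample_g(x) returns a noisy evaluation g_w(x); sample_grad_g(x) a
    # noisy Jacobian of g_w at x; sample_grad_f(y) a noisy gradient of f_v.
    x = x0.astype(float)
    y = sample_g(x)                      # noisy estimate of the inner value E_w[g_w(x)]
    for k in range(1, n_iters + 1):
        alpha = alpha0 / k               # fast timescale: step size for x
        beta = beta0 / np.sqrt(k)        # slow timescale: averaging weight for y
        y = (1.0 - beta) * y + beta * sample_g(x)     # track E_w[ g_w(x) ]
        # Chain-rule stochastic gradient of the composition, evaluated at (x, y).
        grad = sample_grad_g(x).T @ sample_grad_f(y)
        # The proximal step absorbs the nonsmooth penalty.
        x = prox_l1(x - alpha * grad, alpha * reg)
    return x

# Toy instance: inner map g_w(x) = A x + noise, outer loss f_v(y) = ||y - b||^2 / 2.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)
x_hat = asc_pg_sketch(
    np.zeros(3),
    sample_g=lambda x: A @ x + 0.01 * rng.standard_normal(5),
    sample_grad_g=lambda x: A,           # Jacobian of the linear inner map
    sample_grad_f=lambda y: y - b + 0.01 * rng.standard_normal(5),
)

The point of decaying beta more slowly than alpha is that the gradient step only sees the composition through the running estimate y, so y must stabilize faster than x moves; the specific polynomial rates above are placeholders, and the paper derives the schedules that yield its convergence guarantees.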
