Acceleration and Stability of the Stochastic Proximal Point Algorithm
Junhyung Lyle Kim · Panos Toulis · Anastasios Kyrillidis
Stochastic gradient descent (SGD) has emerged as the de facto method for solving (unconstrained) stochastic optimization problems. However, it suffers from two fundamental limitations: $(i)$ slow convergence due to inaccurate gradient approximation, and $(ii)$ numerical instability, especially with respect to step size selection. To address the slow convergence, accelerated variants such as stochastic gradient descent with momentum (SGDM) have been studied; however, the interplay between gradient noise and momentum can aggravate the numerical instability. Proximal point methods, on the other hand, have gained much attention due to their numerical stability, but their accelerated stochastic variants have received limited attention. To bridge this gap, we propose the stochastic proximal point algorithm with momentum (SPPAM), and study its convergence and stability. We show that SPPAM enjoys a better contraction factor than the stochastic proximal point algorithm (SPPA), leading to faster convergence. In terms of stability, we show that the convergence of SPPAM depends on problem constants more favorably than that of SGDM.
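The abstract does not spell out the SPPAM update rule. As a minimal sketch, one can illustrate the idea on least squares, where the single-sample proximal (implicit) step has a closed form; the momentum-extrapolated form below, $z_t = x_t + \beta (x_t - x_{t-1})$ followed by a proximal step at $z_t$, is an assumption for illustration, not taken from the paper, and the helper names are hypothetical:

```python
import numpy as np

# Sketch of a stochastic proximal point step with momentum on least squares,
# f_i(x) = 0.5 * (a_i @ x - b_i)**2. For this loss the proximal step has the
# closed form (standard for implicit SGD / SPPA):
#   prox_{eta f_i}(z) = z - eta * (a_i @ z - b_i) / (1 + eta * ||a_i||^2) * a_i
# The extrapolate-then-prox momentum scheme below is an assumed form.

def prox_least_squares(z, a, b, eta):
    """Closed-form proximal step for f(x) = 0.5 * (a @ x - b)**2."""
    residual = a @ z - b
    return z - eta * residual / (1.0 + eta * (a @ a)) * a

def sppam_sketch(A, b, eta=0.5, beta=0.5, iters=2000, seed=0):
    """Momentum-extrapolated stochastic proximal point sketch.
    Setting beta = 0 recovers plain SPPA."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x_prev = x = np.zeros(d)
    for _ in range(iters):
        i = rng.integers(n)                 # sample one data point
        z = x + beta * (x - x_prev)         # momentum extrapolation (assumed form)
        x_prev, x = x, prox_least_squares(z, A[i], b[i], eta)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 10))
    x_star = rng.standard_normal(10)
    b = A @ x_star                          # noiseless, consistent system
    x_hat = sppam_sketch(A, b)
    print("distance to solution:", np.linalg.norm(x_hat - x_star))
```

Note the stability property the abstract alludes to: because the proximal step divides by $1 + \eta \|a_i\|^2$, the update remains bounded even for large step sizes, whereas the corresponding explicit SGD(M) step can diverge.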

Author Information

Junhyung Lyle Kim (Rice University)
Panos Toulis (Chicago Booth School of Business)
Anastasios Kyrillidis (Rice University)
