Stochastic gradient descent is popular for large-scale optimization but has slow asymptotic convergence due to its inherent variance. To remedy this problem, we introduce an explicit variance reduction method for stochastic gradient descent, which we call stochastic variance reduced gradient (SVRG). For smooth and strongly convex functions, we prove that this method enjoys the same fast convergence rate as stochastic dual coordinate ascent (SDCA) and stochastic average gradient (SAG). However, our analysis is significantly simpler and more intuitive. Moreover, unlike SDCA or SAG, our method does not require the storage of gradients, and thus is more easily applicable to complex problems such as some structured prediction problems and neural network learning.
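To make the update concrete, below is a minimal sketch of the SVRG scheme applied to a least-squares objective: each epoch takes a snapshot of the iterate, computes the full gradient there once, and then runs inner stochastic steps using the variance-reduced direction g_i(w) - g_i(w_snap) + mu. The step size, epoch length, and synthetic data are illustrative assumptions, not values from the paper.

```python
import numpy as np

def svrg_least_squares(X, y, eta=0.05, m=None, epochs=20, seed=0):
    """Minimal SVRG sketch for the objective (1/n) * sum_i 0.5 * (x_i^T w - y_i)^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    m = m or 2 * n                      # inner-loop length (a common heuristic choice)
    w = np.zeros(d)
    for _ in range(epochs):
        w_snap = w.copy()               # snapshot iterate
        # full gradient at the snapshot: mu = (1/n) * sum_i grad f_i(w_snap)
        mu = X.T @ (X @ w_snap - y) / n
        for _ in range(m):
            i = rng.integers(n)
            xi, yi = X[i], y[i]
            g_w = xi * (xi @ w - yi)          # per-example gradient at current iterate
            g_snap = xi * (xi @ w_snap - yi)  # per-example gradient at snapshot
            # variance-reduced stochastic step
            w -= eta * (g_w - g_snap + mu)
    return w

# illustrative usage on synthetic data
X = np.random.default_rng(1).standard_normal((200, 10))
w_true = np.arange(10, dtype=float)
y = X @ w_true
print(np.linalg.norm(svrg_least_squares(X, y) - w_true))
```

Because the full gradient is recomputed only once per epoch and the correction term vanishes as the iterate approaches the snapshot, the variance of the stochastic direction shrinks over time, which is what allows a constant step size, in contrast to plain SGD.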
Author Information
Rie Johnson (RJ Research Consulting)
Tong Zhang (The Hong Kong University of Science and Technology)
More from the Same Authors
- 2016 Poster: Exact Recovery of Hard Thresholding Pursuit »
  Xiaotong Yuan · Ping Li · Tong Zhang
- 2016 Poster: Learning Additive Exponential Family Graphical Models via $\ell_{2,1}$-norm Regularized M-Estimation »
  Xiaotong Yuan · Ping Li · Tong Zhang · Qingshan Liu · Guangcan Liu
- 2015 Poster: Quartz: Randomized Dual Coordinate Ascent with Arbitrary Sampling »
  Zheng Qu · Peter Richtarik · Tong Zhang
- 2015 Poster: Local Smoothness in Variance Reduced Optimization »
  Daniel Vainsencher · Han Liu · Tong Zhang
- 2015 Poster: Semi-supervised Convolutional Neural Networks for Text Categorization via Region Embedding »
  Rie Johnson · Tong Zhang
- 2015 Spotlight: Semi-supervised Convolutional Neural Networks for Text Categorization via Region Embedding »
  Rie Johnson · Tong Zhang
- 2013 Poster: Accelerated Mini-Batch Stochastic Dual Coordinate Ascent »
  Shai Shalev-Shwartz · Tong Zhang
- 2012 Workshop: Modern Nonparametric Methods in Machine Learning »
  Sivaraman Balakrishnan · Arthur Gretton · Mladen Kolar · John Lafferty · Han Liu · Tong Zhang
- 2012 Poster: Selective Labeling via Error Bound Minimization »
  Quanquan Gu · Tong Zhang · Chris Ding · Jiawei Han
- 2011 Poster: Learning to Search Efficiently in High Dimensions »
  Zhen Li · Huazhong Ning · Liangliang Cao · Tong Zhang · Yihong Gong · Thomas S Huang
- 2011 Poster: Spectral Methods for Learning Multivariate Latent Tree Structure »
  Anima Anandkumar · Kamalika Chaudhuri · Daniel Hsu · Sham M Kakade · Le Song · Tong Zhang
- 2011 Poster: Greedy Model Averaging »
  Dong Dai · Tong Zhang
- 2010 Poster: Deep Coding Network »
  Yuanqing Lin · Tong Zhang · Shenghuo Zhu · Kai Yu
- 2010 Poster: Agnostic Active Learning Without Constraints »
  Alina Beygelzimer · Daniel Hsu · John Langford · Tong Zhang
- 2009 Poster: Multi-Label Prediction via Compressed Sensing »
  Daniel Hsu · Sham M Kakade · John Langford · Tong Zhang
- 2009 Poster: Nonlinear Learning using Local Coordinate Coding »
  Kai Yu · Tong Zhang · Yihong Gong
- 2009 Oral: Multi-Label Prediction via Compressed Sensing »
  Daniel Hsu · Sham M Kakade · John Langford · Tong Zhang
- 2008 Poster: Adaptive Forward-Backward Greedy Algorithm for Sparse Learning with Linear Models »
  Tong Zhang
- 2008 Oral: Adaptive Forward-Backward Greedy Algorithm for Sparse Learning with Linear Models »
  Tong Zhang
- 2008 Poster: Sparse Online Learning via Truncated Gradient »
  John Langford · Lihong Li · Tong Zhang
- 2008 Spotlight: Sparse Online Learning via Truncated Gradient »
  John Langford · Lihong Li · Tong Zhang
- 2008 Poster: Multi-stage Convex Relaxation for Learning with Sparse Regularization »
  Tong Zhang
- 2007 Poster: A General Boosting Method and its Application to Learning Ranking Functions for Web Search »
  Zhaohui Zheng · Hongyuan Zha · Tong Zhang · Olivier Chapelle · Keke Chen · Gordon Sun
- 2007 Poster: The Epoch-Greedy Algorithm for Multi-armed Bandits with Side Information »
  John Langford · Tong Zhang
- 2006 Poster: Learning on Graph with Laplacian Regularization »
  Rie Ando · Tong Zhang