Spotlight
Limitations on Variance-Reduction and Acceleration Schemes for Finite Sums Optimization
Yossi Arjevani
We study the conditions under which one is able to efficiently apply variance-reduction and acceleration schemes on finite sums problems. First, we show that, perhaps surprisingly, the finite sum structure by itself is not sufficient for obtaining a complexity bound of $\tilde{\mathcal{O}}((n+L/\mu)\ln(1/\epsilon))$ for $L$-smooth and $\mu$-strongly convex finite sums: one must also know exactly which individual function is being referred to by the oracle at each iteration. Next, we show that for a broad class of first-order and coordinate-descent finite sums algorithms (including, e.g., SDCA, SVRG, SAG), it is not possible to get an `accelerated' complexity bound of $\tilde{\mathcal{O}}((n+\sqrt{n L/\mu})\ln(1/\epsilon))$, unless the strong convexity parameter is given explicitly. Lastly, we show that when this class of algorithms is used for minimizing $L$-smooth and non-strongly convex finite sums, the optimal complexity bound is $\tilde{\mathcal{O}}(n+L/\epsilon)$, assuming that (on average) the same update rule is used for any iteration, and $\tilde{\mathcal{O}}(n+\sqrt{nL/\epsilon})$, otherwise.
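To make the oracle assumption in the abstract concrete, here is a minimal sketch of a variance-reduction scheme in the style of SVRG (one of the algorithms covered by the analysis). The function names and parameters (`svrg`, `grad_i`, `step`, `m`, `epochs`) are illustrative, not from the paper; note that the update explicitly uses the index `i` of the queried individual function, which is exactly the extra information the first result says is needed.

```python
import numpy as np

def svrg(grad_i, x0, n, step, m, epochs):
    """Sketch of SVRG for min_x (1/n) * sum_i f_i(x).

    grad_i(i, x) returns the gradient of the i-th individual function.
    The algorithm must know which index i it is querying; the variance-
    reduced update below uses grad_i at the same index i twice.
    """
    x = x0.copy()
    for _ in range(epochs):
        x_snap = x.copy()
        # Full gradient at the snapshot point (one pass over all n terms).
        full_grad = np.mean([grad_i(i, x_snap) for i in range(n)], axis=0)
        for _ in range(m):
            i = np.random.randint(n)
            # Variance-reduced stochastic gradient: unbiased, and its
            # variance vanishes as x and x_snap approach the minimizer.
            g = grad_i(i, x) - grad_i(i, x_snap) + full_grad
            x = x - step * g
    return x

# Toy usage: f_i(x) = 0.5 * (x - c_i)^2, whose average is minimized at mean(c).
c = np.arange(5.0)
x = svrg(lambda i, x: x - c[i], np.zeros(1), n=5, step=0.1, m=20, epochs=20)
# x converges to mean(c) = 2.0
```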
Author Information
Yossi Arjevani (The Weizmann Institute)
Related Events (a corresponding poster, oral, or spotlight)

2017 Poster: Limitations on Variance-Reduction and Acceleration Schemes for Finite Sums Optimization
Wed Dec 6th 02:30 – 06:30 AM, Room Pacific Ballroom #173
More from the Same Authors

2016 Poster: Dimension-Free Iteration Complexity of Finite Sum Optimization Problems
Yossi Arjevani · Ohad Shamir
2015 Poster: Communication Complexity of Distributed Convex Learning and Optimization
Yossi Arjevani · Ohad Shamir