
Never Go Full Batch (in Stochastic Convex Optimization)
Idan Amir · Yair Carmon · Tomer Koren · Roi Livni

Thu Dec 09 08:30 AM -- 10:00 AM (PST)
We study the generalization performance of \emph{full-batch} optimization algorithms for stochastic convex optimization: these are first-order methods that access only the exact gradient of the empirical risk (rather than gradients with respect to individual data points), and they include a wide range of algorithms such as gradient descent, mirror descent, and their regularized and/or accelerated variants. We provide a new separation result showing that, while algorithms such as stochastic gradient descent can generalize and optimize the population risk to within $\epsilon$ after $O(1/\epsilon^2)$ iterations, full-batch methods either need at least $\Omega(1/\epsilon^4)$ iterations or exhibit a dimension-dependent sample complexity.
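To make the distinction concrete, here is a minimal illustrative sketch (not the paper's lower-bound construction) of the two oracle models on a toy least-squares problem: a full-batch method queries the exact empirical-risk gradient each iteration, while SGD queries the gradient of a single sampled data point. All names and the problem instance below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stochastic convex problem: least-squares empirical risk over n samples.
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def empirical_risk(w):
    return float(np.mean((X @ w - y) ** 2) / 2)

def full_batch_gd(steps, lr=0.01):
    # Full-batch oracle: each step uses the exact gradient of the empirical risk.
    w = np.zeros(d)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n
        w -= lr * grad
    return w

def sgd(steps, lr=0.01):
    # Stochastic oracle: each step uses the gradient at one sampled data point.
    w = np.zeros(d)
    for _ in range(steps):
        i = rng.integers(n)
        grad = X[i] * (X[i] @ w - y[i])
        w -= lr * grad
    return w

w_fb = full_batch_gd(500)
w_sg = sgd(500)
```

Both methods drive the empirical risk down on this benign instance; the paper's point is that on a carefully constructed hard instance, any method restricted to the full-batch oracle needs far more iterations (or more data) to match SGD's population-risk guarantee.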

Author Information

Idan Amir (Tel-Aviv University)
Yair Carmon (Stanford University)
Tomer Koren (Tel Aviv University & Google)
Roi Livni (Tel Aviv University)
