
Asynchronous stochastic convex optimization: the noise is in the noise and SGD don't care
Sorathan Chaturapruek · John Duchi · Christopher Ré

Thu Dec 10 08:00 AM -- 12:00 PM (PST) @ 210 C #96

We show that, asymptotically, completely asynchronous stochastic gradient procedures achieve optimal (even to constant factors) convergence rates for the solution of convex optimization problems, under nearly the same conditions required for asymptotic optimality of standard stochastic gradient procedures. Roughly, the noise inherent to the stochastic approximation scheme dominates any noise from asynchrony. We also give empirical evidence of the strong performance of asynchronous, parallel stochastic optimization schemes, demonstrating that the robustness inherent to stochastic approximation problems allows substantially faster parallel and asynchronous solution methods. In short, we show that for many stochastic approximation problems, as Freddie Mercury sings in Queen's "Bohemian Rhapsody," "Nothing really matters."
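The setting the abstract describes can be illustrated with a minimal sketch (not the authors' implementation): several worker threads run SGD on a shared parameter vector with no locks, so each thread may read a stale iterate, and the only "cost" of asynchrony is extra noise that the stochastic-gradient noise already dominates. The least-squares problem, step size, and thread counts below are illustrative choices, not taken from the paper; note also that CPython's GIL limits true parallelism here, so this is a sketch of the update pattern rather than a performance demonstration.

```python
# Lock-free asynchronous SGD on a shared iterate (Hogwild-style sketch).
# Illustrative problem: least-squares regression; all constants are assumptions.
import threading
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 2000
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)  # small label noise

w = np.zeros(d)  # shared parameter vector, updated without any locks

def worker(steps, seed, eta=0.01):
    local_rng = np.random.default_rng(seed)
    for _ in range(steps):
        i = local_rng.integers(n)
        # Stochastic gradient of the sample-i least-squares loss,
        # computed from a possibly stale read of the shared iterate.
        g = (X[i] @ w - y[i]) * X[i]
        w[:] -= eta * g  # in-place, unsynchronized write

threads = [threading.Thread(target=worker, args=(20000, s)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(float(np.linalg.norm(w - w_true)))  # small despite the unsynchronized updates
```

Despite concurrent, unsynchronized reads and writes, the final iterate lands close to the true parameter, consistent with the abstract's claim that the asynchrony noise is swamped by the sampling noise.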

Author Information

Tum Chaturapruek (Stanford University)

I am a quantitative researcher at Citadel Securities in Chicago. If you would like to get in touch, please contact me at sorathan@cs.stanford.edu. Education: Stanford University, Ph.D. and M.S. in Computer Science (advisors: Ramesh Johari, John C. Mitchell); Harvey Mudd College, B.S. in Mathematics and Computer Science.

John Duchi (Stanford)
Chris Ré (Stanford)
