Optimization using Parallel Gradient Evaluations on Multiple Parameters
Yash Chandak · Shiv Shankar · Venkata Gandikota · Philip Thomas · Arya Mazumdar
Event URL: https://openreview.net/forum?id=weWbPUIMSq

We propose a first-order method for convex optimization in which each step of gradient descent can use gradients from multiple parameters, rather than being restricted to the gradient at a single parameter. This setup is particularly useful when a few processors are available for parallel optimization. Our method uses gradients from multiple parameters in synergy to update those parameters together toward the optimum, while keeping computational and memory complexity of the same order as gradient descent. Empirical results demonstrate that even with gradients from as few as *two* parameters, our method can often obtain significant acceleration and provide robustness to hyperparameter settings. We remark that the primary goal of this work is less theoretical, and is instead aimed at exploring the understudied case of using multiple gradients during each step of optimization.
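The abstract does not specify the update rule, so the following is only a minimal sketch of the general idea, not the authors' algorithm: two parameter vectors are maintained, their gradients are evaluated in parallel (here, sequentially for simplicity) on a convex quadratic, and each update mixes both gradients. The mixing coefficient `mix` and the quadratic objective are assumptions for illustration.

```python
import numpy as np

def grad(x):
    # Gradient of the convex quadratic f(x) = 0.5 * ||x||^2
    return x

def gd(x0, lr=0.1, steps=200):
    # Baseline: ordinary gradient descent from a single parameter vector.
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def two_point_gd(x0, y0, lr=0.1, mix=0.3, steps=200):
    # Hypothetical two-parameter scheme: each step evaluates gradients at
    # both parameter vectors (these evaluations could run on two processors)
    # and each update is a convex combination of the two gradients.
    # Per-step cost and memory remain O(d) per parameter vector.
    x, y = np.array(x0, dtype=float), np.array(y0, dtype=float)
    for _ in range(steps):
        gx, gy = grad(x), grad(y)  # parallelizable evaluations
        x = x - lr * ((1 - mix) * gx + mix * gy)
        y = y - lr * ((1 - mix) * gy + mix * gx)
    return x, y
```

On this quadratic both schemes converge to the optimum at zero; any acceleration claimed in the paper would come from the authors' actual update rule, which this sketch does not reproduce.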

Author Information

Yash Chandak (University of Massachusetts Amherst)
Shiv Shankar (IIT Bombay)
Venkata Gandikota (Syracuse University)
Philip Thomas (University of Massachusetts Amherst)
Arya Mazumdar (University of California, San Diego)