

Poster

A Multi-Batch L-BFGS Method for Machine Learning

Albert Berahas · Jorge Nocedal · Martin Takac

Area 5+6+7+8 #13

Keywords: [ (Other) Optimization ] [ Convex Optimization ] [ Large Scale Learning and Big Data ]


Abstract:

The question of how to parallelize the stochastic gradient descent (SGD) method has received much attention in the literature. In this paper, we focus instead on batch methods that use a sizeable fraction of the training set at each iteration to facilitate parallelism, and that employ second-order information. In order to improve the learning process, we follow a multi-batch approach in which the batch changes at each iteration. This can cause difficulties because L-BFGS employs gradient differences to update the Hessian approximations, and when these gradients are computed using different data points the process can be unstable. This paper shows how to perform stable quasi-Newton updating in the multi-batch setting, illustrates the behavior of the algorithm in a distributed computing platform, and studies its convergence properties for both the convex and nonconvex cases.
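The abstract states that L-BFGS becomes unstable when the gradient differences used to update the Hessian approximation come from different batches, and that the paper shows how to make the update stable. One way this is commonly addressed, and a minimal sketch of the idea, is to form the curvature pair from gradients evaluated on the samples shared by two consecutive batches. The snippet below is an illustration under that assumption, using a toy least-squares loss and a standard two-loop recursion; it is not the authors' exact algorithm, and all names, batch sizes, and step sizes are illustrative choices.

```python
# Multi-batch L-BFGS sketch: curvature pairs from overlapping samples.
# Assumptions (not from the abstract): toy least-squares loss, fixed step
# size, and (s_k, y_k) built from gradients on the batch overlap.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 5000, 20, 10                    # samples, dimension, L-BFGS memory
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def grad(w, idx):
    """Least-squares gradient restricted to the sample subset idx."""
    r = A[idx] @ w - b[idx]
    return A[idx].T @ r / len(idx)

def two_loop(g, S, Y):
    """Standard L-BFGS two-loop recursion: returns H_k @ g."""
    q, alphas = g.copy(), []
    for s, y in zip(reversed(S), reversed(Y)):   # newest pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append((a, rho, s, y))
        q -= a * y
    if S:                                        # initial scaling H_0 = gamma * I
        s, y = S[-1], Y[-1]
        q *= (s @ y) / (y @ y)
    for a, rho, s, y in reversed(alphas):        # oldest pair first
        q += (a - rho * (y @ q)) * s
    return q

w = np.zeros(d)
S, Y = [], []
batch = rng.choice(n, size=512, replace=False)
lr = 0.5
for k in range(50):
    # Draw the next batch; its intersection with the current batch is the
    # common sample set used to compute a consistent gradient difference.
    new = rng.choice(n, size=512, replace=False)
    overlap = np.intersect1d(batch, new)
    if len(overlap) == 0:                        # guard: fall back to new batch
        overlap = new

    g = grad(w, batch)
    p = two_loop(g, S, Y)                        # quasi-Newton direction H_k g_k
    w_new = w - lr * p

    # Stable curvature pair: both gradients use the SAME overlap samples,
    # so y_k reflects curvature rather than sampling noise.
    s_k = w_new - w
    y_k = grad(w_new, overlap) - grad(w, overlap)
    if s_k @ y_k > 1e-10:                        # keep pair only if curvature > 0
        S.append(s_k); Y.append(y_k)
        if len(S) > m:
            S.pop(0); Y.pop(0)

    w, batch = w_new, new
```

The key design point in this sketch is that the gradient difference y_k never mixes gradients computed on disjoint data: evaluating both endpoints on the overlap keeps the secant pair consistent, which is the stability issue the abstract highlights.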
