Poster in Workshop: Order up! The Benefits of Higher-Order Optimization in Machine Learning

The Trade-offs of Incremental Linearization Algorithms for Nonsmooth Composite Problems

Krishna Pillutla · Vincent Roulet · Sham Kakade · Zaid Harchaoui


Abstract:

Gauss-Newton methods and their stochastic versions have been widely used in machine learning. Their nonsmooth counterparts, modified Gauss-Newton or prox-linear algorithms, can lead to markedly different outcomes than gradient descent in large-scale settings. We explore the contrasting performance of these two classes of algorithms theoretically, on a stylized statistical example, and experimentally, on learning problems including structured prediction.
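For context on the algorithms named in the abstract, here is a minimal sketch of one prox-linear (modified Gauss-Newton) step for a nonsmooth composite problem min_x h(c(x)). The choice h = l1 norm, the toy data, and the dual projected-gradient inner solver are all illustrative assumptions, not the paper's specific setup or method.

```python
import jax
import jax.numpy as jnp

def prox_linear_step(c, x, eta, inner_iters=200):
    """One prox-linear step for min_x h(c(x)) with h = the l1 norm.

    Subproblem: min_d ||c(x) + J d||_1 + (1/(2*eta)) ||d||^2, solved via
    its dual, max_{||u||_inf <= 1} u^T c(x) - (eta/2) ||J^T u||^2, by
    projected gradient ascent; the primal step is d = -eta * J^T u.
    """
    r = c(x)                                   # residual c(x)
    J = jax.jacfwd(c)(x)                       # Jacobian of c at x
    L = eta * jnp.linalg.norm(J @ J.T, ord=2) + 1e-12  # dual gradient Lipschitz constant
    u = jnp.zeros_like(r)
    for _ in range(inner_iters):
        grad = r - eta * (J @ (J.T @ u))       # gradient of the dual objective
        u = jnp.clip(u + grad / L, -1.0, 1.0)  # project onto the inf-norm ball
    return x - eta * (J.T @ u)                 # x + d with d = -eta * J^T u

# Hypothetical toy problem: c(x)_i = <a_i, x>^2 - b_i, so h(c(x)) = ||c(x)||_1.
A = jax.random.normal(jax.random.PRNGKey(0), (20, 5))
b = (A @ jax.random.normal(jax.random.PRNGKey(1), (5,))) ** 2
c = lambda x: (A @ x) ** 2 - b

x = jax.random.normal(jax.random.PRNGKey(2), (5,))
for _ in range(50):
    x = prox_linear_step(c, x, eta=0.1)
print(float(jnp.sum(jnp.abs(c(x)))))           # composite objective h(c(x))
```

Replacing h with the squared l2 norm turns the subproblem into a regularized linear least-squares problem and recovers the classical (Levenberg-Marquardt-style) Gauss-Newton step, the smooth case the abstract contrasts with gradient descent.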
