Workshop
Fri Dec 08 08:00 AM -- 06:30 PM (PST) @ Hall A
OPT 2017: Optimization for Machine Learning
Suvrit Sra · Sashank J. Reddi · Alekh Agarwal · Benjamin Recht

Workshop Home Page

Dear NIPS Workshop Chairs,

We propose to organize the workshop:

OPT 2017: Optimization for Machine Learning.

This year marks a major milestone in the history of OPT, as it will be the 10th anniversary edition of this long-running NIPS workshop.

The previous OPT workshops enjoyed packed, at times overflowing, attendance. This strong interest is no surprise: optimization is the second-largest topic at NIPS and is foundational for the wider ML community.

Looking back over the past decade, a strong trend is apparent: the intersection of OPT and ML has grown monotonically, to the point that several cutting-edge advances in optimization now arise from the ML community. The distinctive feature of optimization within ML is its departure from textbook approaches; in particular, its goals are driven by "big data," where both the models and their practical implementation are crucial.

This intimate relation between OPT and ML is the core theme of our workshop. OPT workshops have previously covered a variety of topics, such as frameworks for convex programs (D. Bertsekas), the intersection of ML and optimization, especially SVM training (S. Wright), large-scale learning via stochastic gradient methods and their tradeoffs (L. Bottou, N. Srebro), exploitation of structured sparsity (L. Vandenberghe), randomized methods for extremely large-scale convex optimization (A. Nemirovski), complexity-theoretic foundations of convex optimization (Y. Nesterov), distributed large-scale optimization (S. Boyd), asynchronous and sparsity-based stochastic gradient methods (B. Recht), algebraic techniques in machine learning (P. Parrilo), insights into nonconvex optimization (A. Lewis), sums-of-squares techniques (J. Lasserre), optimization in the context of deep learning (Y. Bengio), stochastic convex optimization (G. Lan), and new views on interior-point methods (E. Hazan), among others.

Several ideas propounded in these talks have become important research topics in ML and optimization, especially randomized algorithms, stochastic gradient methods, and variance-reduced stochastic gradient methods. An edited book, "Optimization for Machine Learning" (S. Sra, S. Nowozin, and S. Wright; MIT Press, 2011), grew out of the first three OPT workshops and contains high-quality contributions from many of the speakers and attendees; there have been sustained requests for a follow-up volume.
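
For readers less familiar with the variance-reduced stochastic gradient methods mentioned above, here is a minimal illustrative sketch (Python/NumPy; not taken from the workshop materials, with problem sizes and step sizes chosen only for illustration) contrasting plain SGD with an SVRG-style update on a least-squares problem:

    import numpy as np

    # Minimal sketch: plain SGD vs. an SVRG-style variance-reduced update
    # for min_x (1/2n) ||A x - b||^2. Sizes and step sizes are illustrative.
    rng = np.random.default_rng(0)
    n, d = 500, 20
    A = rng.normal(size=(n, d))
    b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

    def grad_i(x, i):
        # gradient of the i-th component f_i(x) = 0.5 * (a_i^T x - b_i)^2
        return A[i] * (A[i] @ x - b[i])

    def sgd(x, steps=5000, lr=1e-3):
        for _ in range(steps):
            x = x - lr * grad_i(x, rng.integers(n))
        return x  # with a constant step size, stalls at a noise floor

    def svrg(x, epochs=25, lr=5e-3):
        for _ in range(epochs):
            x_snap = x.copy()
            mu = A.T @ (A @ x_snap - b) / n  # full gradient at the snapshot
            for _ in range(n):
                i = rng.integers(n)
                # stochastic gradient whose variance shrinks near the snapshot
                x = x - lr * (grad_i(x, i) - grad_i(x_snap, i) + mu)
        return x

    x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
    x0 = np.zeros(d)
    print("SGD  error:", np.linalg.norm(sgd(x0) - x_star))
    print("SVRG error:", np.linalg.norm(svrg(x0) - x_star))

With a constant step size, the plain SGD iterate hovers at a noise floor determined by the gradient variance, while the variance-reduced iterate typically converges much closer to the least-squares solution; this gap is one reason such methods became a central research topic.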

We wish to use OPT 2017 as a platform to foster discussion, discovery, and dissemination of the state of the art in optimization as it relates to machine learning, and, beyond that, as a platform to identify new directions and challenges that will drive future research.

Continuing this tradition, the workshop will bring in experts from optimization to share their perspectives, alongside crossover experts in ML who will share their views and recent advances. Our tentative invited speakers for this year are:

Yurii Nesterov (already agreed)
Dimitri Bertsekas (already agreed)
Francis Bach (already agreed)

Distinction from other optimization workshops at NIPS:

Compared to the other optimization-focused workshops that happen (or have happened) at NIPS, the key distinguishing features of OPT are: (a) it provides a unique bridge between the ML community and the wider optimization community, and it is the longest-running NIPS workshop on optimization (since NIPS 2008); (b) it encourages theoretical work on an equal footing with practical efficiency; (c) it caters to a wide body of NIPS attendees, experts and beginners alike; and (d) it covers a broad spectrum of optimization, with a focus on bringing new optimization ideas from different communities into ML while identifying key future directions for the broader OPTML community.

Organization
----------------

The main features of the proposed workshop are:

1. One day long with morning and afternoon sessions
2. Four invited talks by leading experts from optimization and ML
3. Contributed talks from the broader OPT and ML community
4. A panel discussion exploring key future research directions for OPTML

Opening Remarks (Intro)
Invited Talk: Leon Bottou (Talk)
Poster Session
Invited Talk: Yurii Nesterov (Talk)
Coffee Break 1 (Break)
Spotlight: Oracle Complexity of Second-Order Methods for Smooth Convex Optimization (Spotlight)
Spotlight: Gradient Diversity: a Key Ingredient for Scalable Distributed Learning (Spotlight)
Invited Talk: Francis Bach (Talk)
Lunch Break (Break)
Invited Talk: Dimitri Bertsekas (Talk)
Spotlight: Lower Bounds for Finding Stationary Points of Non-Convex, Smooth High-Dimensional Functions (Spotlight)
Coffee Break 2 (Break)
Invited Talk: Pablo Parrilo (Talk)
Spotlight: Efficiently Optimizing over (Non-Convex) Cones via Approximate Projections (Spotlight)
Poster Session II (Posters)