Workshop
Fri Dec 8th 08:00 AM -- 06:30 PM @ Hall A
OPT 2017: Optimization for Machine Learning
Suvrit Sra · Sashank J. Reddi · Alekh Agarwal · Benjamin Recht

Dear NIPS Workshop Chairs,

We propose to organize the workshop:

       OPT 2017: Optimization for Machine Learning.


This year marks a major milestone in the history of OPT: it is the 10th anniversary edition of this long-running NIPS workshop.

The previous OPT workshops enjoyed packed, even overpacked, attendance. This strong interest is no surprise: optimization is the second-largest topic at NIPS and is foundational for the wider ML community.

Looking back over the past decade, a strong trend is apparent: the intersection of OPT and ML has grown steadily, to the point that several cutting-edge advances in optimization now arise from the ML community. The distinctive feature of optimization within ML is its departure from textbook approaches; in particular, it pursues a different set of goals driven by big data, where both models and practical implementation are crucial.

This intimate relation between OPT and ML is the core theme of our workshop. OPT workshops have previously covered a variety of topics, such as frameworks for convex programs (D. Bertsekas), the intersection of ML and optimization, especially SVM training (S. Wright), large-scale learning via stochastic gradient methods and its tradeoffs (L. Bottou, N. Srebro), exploitation of structured sparsity (L. Vandenberghe), randomized methods for extremely large-scale convex optimization (A. Nemirovski), complexity-theoretic foundations of convex optimization (Y. Nesterov), distributed large-scale optimization (S. Boyd), asynchronous and sparsity-based stochastic gradient methods (B. Recht), algebraic techniques in machine learning (P. Parrilo), insights into nonconvex optimization (A. Lewis), sums-of-squares techniques (J. Lasserre), optimization in the context of deep learning (Y. Bengio), stochastic convex optimization (G. Lan), and new views on interior-point methods (E. Hazan), among others.

Several ideas propounded in these talks have become important research topics in ML and optimization, especially in the areas of randomized algorithms, stochastic gradient methods, and variance-reduced stochastic gradient methods. An edited book, "Optimization for Machine Learning" (S. Sra, S. Nowozin, and S. Wright; MIT Press, 2011), grew out of the first three OPT workshops and contains high-quality contributions from many of the speakers and attendees; there have been sustained requests for a follow-up to this volume.

We wish to use OPT 2017 as a platform to foster discussion, discovery, and dissemination of the state of the art in optimization as relevant to machine learning, and, beyond that, as a platform to identify new directions and challenges that will drive future research.

Continuing this tradition, the workshop will bring in experts in optimization to share their perspectives while leveraging crossover experts in ML to share their views and recent advances. Our tentative invited speakers for this year are Leon Bottou, Yurii Nesterov, Francis Bach, Dimitri Bertsekas, and Pablo Parrilo.

Distinction from other optimization workshops at NIPS:

Compared to the other optimization-focused workshops that happen (or have happened) at NIPS, the key distinguishing features of OPT are: (a) it provides a unique bridge between the ML community and the wider optimization community, and it is the longest-running NIPS workshop on optimization (since NIPS 2008); (b) it encourages theoretical work on an equal footing with practical efficiency; (c) it caters to a wide body of NIPS attendees, experts and beginners alike; and (d) it covers optimization across a broad spectrum, with a focus on bringing new optimization ideas from different communities into ML while identifying key future directions for the broader OPTML community.

## Organization

The main features of the proposed workshop are:

1. One day long with morning and afternoon sessions
2. Four invited talks by leading experts from optimization and ML
3. Contributed talks from the broader OPT and ML community
4. A panel discussion exploring key future research directions for OPTML

## Schedule

08:50 AM Opening Remarks (Intro)
09:00 AM Poster Session
Tsz Kit (Tim) Lau, Johannes Maly, Nicolas Loizou, Christian Kroer, Yuan Yao, Youngsuk Park, Reka Agnes Kovacs, Dong Yin, Vlad Zhukov, Woosang Lim, David Barmherzig, Dimitris Metaxas, Bin Shi, Rajan Udwani, William Brendel, Yi Zhou, Vladimir Braverman, Sijia Liu, Eugene Golikov
09:00 AM Invited Talk: Leon Bottou (Talk)
09:45 AM Invited Talk: Yurii Nesterov (Talk)
10:30 AM Coffee Break 1 (Break)
11:00 AM Spotlight: Oracle Complexity of Second-Order Methods for Smooth Convex Optimization (Spotlight)
11:15 AM Spotlight: Gradient Diversity: a Key Ingredient for Scalable Distributed Learning (Spotlight)
11:30 AM Invited Talk: Francis Bach (Talk)
12:15 PM Lunch Break (Break)
02:00 PM Invited Talk: Dimitri Bertsekas (Talk)
02:45 PM Spotlight: Lower Bounds for Finding Stationary Points of Non-Convex, Smooth High-Dimensional Functions (Spotlight)
03:00 PM Coffee Break 2 (Break)
03:30 PM Invited Talk: Pablo Parrilo (Talk)
04:15 PM Spotlight: Efficiently Optimizing over (Non-Convex) Cones via Approximate Projections (Spotlight)
04:30 PM Poster Session II (Posters)