Workshop
Optimization for Machine Learning (OPT2015)
Suvrit Sra · Alekh Agarwal · Léon Bottou · Sashank J. Reddi
510 ac
Fri 11 Dec, 5:30 a.m. PST
Dear NIPS Workshop Chairs,
We propose to organize the workshop
OPT2015: Optimization for Machine Learning.
As the eighth in its series, OPT 2015 builds on the significant precedent established by OPT 2008–OPT 2014, all of which have been remarkably well-received NIPS workshops.
The previous OPT workshops enjoyed packed (to overpacked) attendance, and this enthusiastic reception attests to the central importance of optimization within machine learning.
The intersection of OPT and ML has grown steadily over the years, to the point that many cutting-edge advances in optimization now arise from the ML community. A driving force is the departure of algorithms from textbook approaches, in particular through attention to problem-specific structure and to deployability in practical (even industrial) big-data settings.
This intimate relation of optimization with ML is the key motivation for our workshop. We wish to use OPT2015 as a platform to foster discussion, discovery, and dissemination of the state-of-the-art in optimization as relevant to machine learning.
As in past years, the workshop will continue to bring luminaries from the field of optimization to share classical perspectives, and will give thought leaders from machine learning a platform to share exciting recent advances. To this end, our tentative invited speakers for this year are Elad Hazan, Guanghui Lan, and Jorge Nocedal. Additionally, we hope to continue the tradition of high-quality contributed talks and posters.
I. INTRODUCTION
-------------
OPT workshops have previously covered a variety of topics, including:

- frameworks for convex programs (D. Bertsekas)
- the intersection of ML and optimization, especially SVM training (S. Wright)
- large-scale learning via stochastic gradient methods and its tradeoffs (L. Bottou, N. Srebro)
- exploitation of structured sparsity (L. Vandenberghe)
- randomized methods for extremely large-scale convex optimization (A. Nemirovski)
- complexity-theoretic foundations of convex optimization (Y. Nesterov)
- distributed large-scale optimization (S. Boyd)
- asynchronous and sparsity-based stochastic gradient methods (B. Recht)
- algebraic techniques in machine learning (P. Parrilo)
- insights into nonconvex optimization (A. Lewis)
- sums-of-squares techniques (J. Lasserre)
- optimization in the context of deep learning (Y. Bengio)
Several ideas propounded in these talks have become important research topics in ML and optimization, especially in the area of randomized algorithms and stochastic gradient methods. An edited book, "Optimization for Machine Learning" (S. Sra, S. Nowozin, and S. Wright; MIT Press, 2011), grew out of the first three OPT workshops and contains high-quality contributions from many of the speakers and attendees.
Much of the recent focus has been on large-scale first-order convex optimization algorithms for machine learning, from both theoretical and methodological points of view. Covered topics included stochastic gradient algorithms, (accelerated) proximal algorithms, decomposition and coordinate descent algorithms, and parallel and distributed optimization. Theoretical and practical advances in these methods remain a topic of core interest to the workshop. Recent years have also seen interesting advances in non-convex optimization, such as a growing body of results on alternating minimization, tensor factorization, and related methods.
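To make the first-order setting concrete, the sketch below shows a plain stochastic gradient method applied to least-squares regression. This is a minimal illustration, not material from the proposal itself; the function name, step size, and sampling scheme are all illustrative assumptions.

```python
import numpy as np

def sgd_least_squares(X, y, lr=0.01, epochs=10, seed=0):
    """Minimal SGD sketch: minimize (1/2n) * ||X w - y||^2.

    Illustrative only; a constant step size is assumed for simplicity.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):           # visit points in random order
            grad_i = (X[i] @ w - y[i]) * X[i]  # gradient of the i-th loss term
            w -= lr * grad_i                   # stochastic gradient step
    return w
```

The accelerated, proximal, and coordinate-descent variants mentioned above modify or augment this basic update step, for instance by adding momentum, a proximal operator, or per-coordinate updates.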
We also do not wish to ignore settings that are not particularly large scale, where one does have the time to wield substantial computational resources. Such settings call for high-accuracy solutions and a deep understanding of the lessons contained in the data. Examples valuable to machine learners include the exploration of genetic and environmental data to identify risk factors for disease, and problems where the amount of observed data is not huge but the mathematical model is complex. Consequently, we also encourage, for instance, optimization methods on manifolds, ML problems with differential-geometric antecedents, approaches using advanced algebraic techniques, and computational topology.