
Non-convex Optimization for Machine Learning: Theory and Practice
Anima Anandkumar · Niranjan Uma Naresh · Kamalika Chaudhuri · Percy Liang · Sewoong Oh

Sat Dec 12 05:30 AM -- 03:30 PM (PST) @ 513 cd
Event URL: https://sites.google.com/site/nips2015nonconvexoptimization/home

Non-convex optimization is ubiquitous in machine learning. In general, reaching the global optimum of these problems is NP-hard, and in practice, local search methods such as gradient descent can get stuck in spurious local optima and suffer from poor convergence.

Over the last few years, tremendous progress has been made in establishing theoretical guarantees for many non-convex optimization problems. While there are worst-case instances that are computationally hard to solve, focus has shifted to characterizing transparent conditions under which these problems become tractable. In many instances, these conditions turn out to be mild and natural for machine learning applications.

One area of non-convex optimization that has attracted extensive interest is spectral learning. This involves finding spectral decompositions of matrices and tensors that correspond to moments of a multivariate distribution. These algorithms are guaranteed to recover a consistent solution to the parameter estimation problem in many latent variable models such as topic admixture models, HMMs, ICA, and most recently, even non-linear models such as neural networks. In contrast to traditional algorithms like expectation maximization (EM), these algorithms come with polynomial computational and sample complexity guarantees. Analysis of these methods involves understanding the optimization landscape for tensor algebraic structures.
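As a concrete illustration of the tensor machinery behind these methods, the sketch below runs tensor power iteration on a synthetic symmetric third-order tensor with an orthogonal decomposition (the setting where convergence guarantees are cleanest). The tensor, weights, and function names here are illustrative, not taken from any particular paper's implementation.

```python
import numpy as np

def tensor_power_iteration(T, n_iters=100, seed=0):
    """Recover one component of a symmetric, orthogonally decomposable
    3rd-order tensor T = sum_i w_i * a_i (x) a_i (x) a_i via power iteration."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(T.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(n_iters):
        # Tensor contraction T(I, v, v): v_new[i] = sum_{j,k} T[i,j,k] v[j] v[k]
        v_new = np.einsum('ijk,j,k->i', T, v, v)
        lam = np.linalg.norm(v_new)
        v = v_new / lam
    return lam, v

# Synthetic moment tensor with two orthonormal components (illustrative data)
rng = np.random.default_rng(1)
A = np.linalg.qr(rng.standard_normal((4, 4)))[0][:, :2]
w = np.array([3.0, 1.5])
T = sum(wi * np.einsum('i,j,k->ijk', a, a, a) for wi, a in zip(w, A.T))

lam, v = tensor_power_iteration(T)
# lam converges to one of the weights w_i and v to the matching component a_i;
# deflating (subtracting lam * v (x) v (x) v) and repeating recovers the rest.
```

In the orthogonal setting the iteration converges quadratically to one of the components; repeated runs with deflation recover the full decomposition.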

As another example of guaranteed non-convex methods, there has been interest in the problem of dictionary learning, which involves expressing the observed data as a sparse combination of dictionary elements. Recent results have established that both the dictionary and the coefficients can be consistently recovered in the challenging overcomplete case, where the number of dictionary elements can exceed the input dimensionality.
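A minimal sketch of this alternating-minimization style of dictionary learning is given below, on synthetic data in the overcomplete regime (more atoms than dimensions). The data generation, the hard-thresholding sparse-coding step, and the mildly perturbed initialization are all assumptions made for illustration; provable variants in the literature differ in their coding step and initialization.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, n, s = 32, 48, 400, 3          # K > d: overcomplete dictionary

# Hypothetical synthetic setup: unit-norm random dictionary atoms
D_true = rng.standard_normal((d, K))
D_true /= np.linalg.norm(D_true, axis=0)

# Each sample is an s-sparse combination with coefficients bounded away from 0
X_true = np.zeros((K, n))
for j in range(n):
    idx = rng.choice(K, size=s, replace=False)
    X_true[idx, j] = rng.choice([-1.0, 1.0], size=s) * rng.uniform(1.0, 2.0, size=s)
Y = D_true @ X_true

def sparse_code(D, Y, s):
    """Keep the s atoms most correlated with each sample, then solve
    least squares on that support (hard-thresholding sparse coding)."""
    X = np.zeros((D.shape[1], Y.shape[1]))
    for j in range(Y.shape[1]):
        idx = np.argsort(-np.abs(D.T @ Y[:, j]))[:s]
        X[idx, j] = np.linalg.lstsq(D[:, idx], Y[:, j], rcond=None)[0]
    return X

# Alternate between sparse coding and a least-squares dictionary update,
# starting from a mildly perturbed initialization (as the theory assumes)
D = D_true + 0.05 * rng.standard_normal((d, K))
D /= np.linalg.norm(D, axis=0)
for _ in range(10):
    Xh = sparse_code(D, Y, s)
    D = Y @ np.linalg.pinv(Xh)
    D /= np.linalg.norm(D, axis=0)

rel_err = np.linalg.norm(Y - D @ sparse_code(D, Y, s)) / np.linalg.norm(Y)
```

Despite the non-convexity, the reconstruction error drops sharply from a good initialization, which is the regime the recovery guarantees address.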

There is also interest in analyzing online algorithms for non-convex methods. Recent work has established that simple stochastic gradient descent (SGD) with appropriately added noise can escape saddle points and converge to a local optimum in bounded time for a large class of non-convex problems. This is especially important since non-convex problems usually suffer from an exponential number of saddle points.
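The saddle-escaping effect of added noise can be seen on a toy objective. The function below is an illustrative choice, not one from the literature: f(w) = (w1² - 1)² + w2² has a strict saddle at the origin and minima at (±1, 0). Plain gradient descent started exactly at the saddle never moves (the gradient is zero there), while the noisy iterate escapes and converges to a minimum.

```python
import numpy as np

def grad(w):
    # f(w) = (w1^2 - 1)^2 + w2^2 : saddle point at the origin, minima at (+-1, 0)
    return np.array([4 * w[0] * (w[0] ** 2 - 1), 2 * w[1]])

rng = np.random.default_rng(0)
w = np.zeros(2)                 # start exactly at the saddle point
eta = 0.05
for _ in range(2000):
    noise = rng.standard_normal(2) * 0.01   # small isotropic perturbation
    w = w - eta * (grad(w) + noise)

# Noiseless gradient descent from the origin would stay put forever; the
# perturbation pushes the iterate into the unstable direction, after which
# the negative curvature amplifies it exponentially until it reaches a minimum.
```

This is the intuition behind the bounded-time guarantees: the perturbation only needs to land a small component along the negative-curvature direction, and the dynamics do the rest.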

Finally, recent years have also seen novel applications of non-convex methods with rigorous guarantees. For example, many of these methods have shown great promise in diverse application domains such as natural language processing, social networks, health informatics, and biological sequence analysis.

There are certainly many challenging open problems in the area of non-convex optimization. While guarantees have been established in individual instances, there is no common unifying theme of what makes a non-convex problem tractable. Many challenging instances, such as optimization for training multi-layer neural networks or analyzing novel regularization techniques such as dropout for non-convex optimization, remain wide open. On the practical side, conversations between theorists and practitioners can help identify what kinds of conditions are reasonable for specific applications, and thus lead to the design of practically motivated algorithms for non-convex optimization with rigorous guarantees.

This workshop will bring together researchers from disparate communities and bridge the gap between theoreticians and practitioners. To facilitate discussion between the two groups, we aim to make the workshop easily accessible to people currently unfamiliar with the intricate details of these methods. We plan to have an open problems session and a discussion session to spur further research in this area. There will also be an invited poster session featuring top active student researchers in the area to increase quality participation in the workshop.

Sat 5:40 a.m. - 6:00 a.m. [iCal]
Opening and Overview (Talk)
Anima Anandkumar
Sat 6:10 a.m. - 6:40 a.m. [iCal]
Large-Scale Optimization for Deep Learning (Talk)
Yann LeCun
Sat 7:30 a.m. - 8:00 a.m. [iCal]
When Your Big Data Seems Too Small (Talk)
Gregory Valiant
Sat 8:00 a.m. - 8:30 a.m. [iCal]
Convolutional Dictionary Learning through Tensor Factorization (Talk)
Furong Huang
Sat 11:30 a.m. - 12:00 p.m. [iCal]
Provable algorithms for non convex optimization (Talk)
Sanjeev Arora
Sat 12:00 p.m. - 12:30 p.m. [iCal]
Non convex Optimization by Complexity Progression (Talk)
Hossein Mobahi
Sat 12:30 p.m. - 1:00 p.m. [iCal]
Computably Feasible Greedy Algorithms for Neural Nets (Talk)
Sat 1:30 p.m. - 2:00 p.m. [iCal]
Taking it Easy (Talk)
Christopher Ré
Sat 2:00 p.m. - 2:30 p.m. [iCal]
Spectral Algorithms for Learning HMMs and Tree HMMs for Epigenetics Data (Talk)
Kevin Chen

Author Information

Anima Anandkumar (UC Irvine)
Niranjan Uma Naresh (University of California Irvine)
Kamalika Chaudhuri (UCSD)
Percy Liang (Stanford University)
Sewoong Oh (UIUC)
