Workshop
Discrete Optimization in Machine Learning: Structures, Algorithms and Applications
Andreas Krause · Pradeep Ravikumar · Jeffrey A Bilmes · Stefanie Jegelka

Sat Dec 11 07:30 AM -- 06:30 PM (PST) @ Hilton: Cheakamus
Event URL: http://www.discml.cc/

Solving optimization problems with ultimately discrete solutions is becoming increasingly important in machine learning. At the core of statistical machine learning is the task of inferring conclusions from data, and when the variables underlying the data are discrete, both learning the model from data and making predictions with the estimated model are discrete optimization problems. Many of the resulting optimization problems are NP-hard, and standard off-the-shelf optimization procedures typically become intractable as the problem size grows.

Fortunately, most discrete optimization problems that arise in machine learning have specific structure, which can be leveraged to develop tractable exact or approximate optimization procedures. For example, consider a discrete graphical model over a set of random variables. For the task of prediction, a key structural object is the "marginal polytope," a convex bounded set characterized by the underlying graph of the graphical model. Properties of this polytope, as well as its approximations, have been successfully used to develop efficient algorithms for inference. For the task of model selection, a key structural object is the discrete graph itself. Another problem structure is sparsity: while estimating a high-dimensional regression model from a limited amount of data is typically an ill-posed problem, it becomes solvable if many of the coefficients are known to be zero. Yet another problem structure, submodularity, a discrete analog of convexity, has been shown to arise in many machine learning problems, including structure learning of probabilistic models, variable selection, and clustering. One of the primary goals of this workshop is to investigate how to leverage such structures.

There are two major classes of approaches to solving such discrete optimization problems in machine learning: combinatorial algorithms and continuous relaxations. In the first, the discrete optimization problems are solved directly in the discrete constraint space of the variables, typically via search-based procedures that exploit the discrete structure to limit the search space. In the second, the discrete problems are transformed into continuous, often tractable convex problems by relaxing the integrality constraints; the resulting fractional solutions are then "rounded" back to the discrete domain. Another goal of this workshop is to bring researchers from these two communities together to discuss (a) the tradeoffs and respective benefits of the existing approaches, and (b) the problem structures suited to each. For instance, submodular problems can be tractably solved using combinatorial algorithms (a sketch follows below); similarly, in certain cases, continuous relaxations yield discrete solutions that are either exact or within a multiplicative factor of the optimal objective value (also sketched below). In addition to studying discrete structures and algorithms, the workshop will put a particular emphasis on novel applications of discrete optimization in machine learning.
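To make the combinatorial side concrete, here is a minimal sketch (not taken from any workshop material) of the classic greedy algorithm for maximizing a monotone submodular function under a cardinality constraint. The coverage function and the toy instance below are hypothetical examples; the guarantee that greedy attains at least a (1 - 1/e) fraction of the optimum for such functions is the well-known Nemhauser-Wolsey-Fisher result.

    def coverage(sets, chosen):
        """f(S) = number of ground elements covered by the chosen sets.

        Coverage is a standard example of a monotone submodular function:
        the marginal gain of adding a set can only shrink as the chosen
        collection grows.
        """
        covered = set()
        for i in chosen:
            covered |= sets[i]
        return len(covered)

    def greedy_max(sets, k):
        """Pick k sets, each step adding the one with largest marginal gain."""
        chosen = []
        for _ in range(k):
            best, best_gain = None, -1
            for i in range(len(sets)):
                if i in chosen:
                    continue
                gain = coverage(sets, chosen + [i]) - coverage(sets, chosen)
                if gain > best_gain:
                    best, best_gain = i, gain
            chosen.append(best)
        return chosen

    # Hypothetical toy instance: ground set {0,...,5}, four candidate sets.
    sets = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}]
    print(greedy_max(sets, k=2))  # [0, 2], covering all six elements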
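On the relaxation side, here is a minimal sketch of relax-and-round for minimum vertex cover, assuming SciPy is available; the graph is a toy example. Relaxing the integrality constraints x_v in {0, 1} to the interval [0, 1] yields a linear program, and rounding the fractional solution at threshold 1/2 gives the classic factor-2 approximation, since every edge constraint x_u + x_v >= 1 forces at least one endpoint to be rounded up.

    import numpy as np
    from scipy.optimize import linprog

    def lp_vertex_cover(n, edges):
        """Solve the vertex cover LP relaxation, then round at threshold 1/2."""
        c = np.ones(n)                        # minimize sum_v x_v
        A = np.zeros((len(edges), n))         # -(x_u + x_v) <= -1 for each edge
        for row, (u, v) in enumerate(edges):
            A[row, u] = A[row, v] = -1.0
        res = linprog(c, A_ub=A, b_ub=-np.ones(len(edges)), bounds=[(0, 1)] * n)
        # Every edge has an endpoint with x >= 1/2, so this is a valid cover;
        # its size is at most twice the LP (hence the integral) optimum.
        return [v for v in range(n) if res.x[v] >= 0.5 - 1e-9]

    # Toy graph: a 4-cycle; the minimum vertex cover has size 2.
    print(lp_vertex_cover(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))

On bipartite graphs such as this cycle the LP is integral, so rounding recovers an exact solution; on general graphs it is within a multiplicative factor of 2, illustrating the "multiplicative factor" guarantee mentioned above.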
Please see the workshop webpage (http://www.discml.cc) for the final schedule.

Author Information

Andreas Krause (ETHZ)
Pradeep Ravikumar (Carnegie Mellon University)
Jeffrey A Bilmes (University of Washington, Seattle)

Jeffrey A. Bilmes is a professor in the Department of Electrical and Computer Engineering at the University of Washington, Seattle. He is also an adjunct professor in Computer Science & Engineering and in the Department of Linguistics, and the founder of the MELODI (MachinE Learning for Optimization and Data Interpretation) lab in the department. Bilmes received his Ph.D. from the Computer Science Division of the Department of Electrical Engineering and Computer Science at the University of California, Berkeley, and a master's degree from MIT. He was also a researcher at the International Computer Science Institute and a member of the Realization group there.

Prof. Bilmes is a 2001 NSF CAREER award winner, a 2002 CRA Digital Government Fellow, a 2008 NAE Gilbreth Lectureship award recipient, and a 2012/2013 ISCA Distinguished Lecturer. He was, along with Andrew Ng, one of the two program chairs of UAI (Conference on Uncertainty in Artificial Intelligence) in 2009, and then its general chair in 2010. He was also a workshop chair (2011) and the tutorials chair (2014) at NIPS/NeurIPS (Neural Information Processing Systems), and has been a regular senior technical chair at NeurIPS/NIPS since then. He was an action editor for JMLR (Journal of Machine Learning Research).

Prof. Bilmes's primary interests lie in statistical modeling (particularly graphical-model approaches) and signal processing for pattern classification, speech recognition, language processing, bioinformatics, machine learning, submodularity in combinatorial optimization and machine learning, active and semi-supervised learning, and audio/music processing. He is particularly interested in temporal graphical models (or dynamic graphical models, which include HMMs, DBNs, and CRFs), in designing efficient algorithms for them, and in designing their structure so that they perform better as structured classifiers. He also has strong interests in speech-based human-computer interfaces, the statistical properties of natural objects and natural scenes, information theory and its relation to natural computation by humans and pattern recognition by machines, and computational music processing (such as human timing subtleties). He is also quite interested in high-performance computing systems, computer architecture, and software techniques to reduce power consumption.

Prof. Bilmes pioneered (starting in 2003) the development of submodularity within machine learning, and he received best paper awards at ICML 2013, NIPS 2013, and ACM BCB 2016, all in this area. In 2014, he also received a most-influential-paper-in-25-years award from the International Conference on Supercomputing, given to a paper on high-performance matrix optimization. He has authored the Graphical Models Toolkit (GMTK), a dynamic graphical-model based software system widely used in speech, language, bioinformatics, and human-activity recognition.

Stefanie Jegelka (MIT)
