

Workshop

Optimization for Machine Learning

Suvrit Sra · Sebastian Nowozin · Stephen Wright
Dec 10, 7:30 AM - 6:30 PM Westin: Emerald A

Our workshop focuses on optimization theory and practice relevant to machine learning. It builds on the precedent established by two of our previous, well-received NIPS workshops:

(@NIPS08) http://opt2008.kyb.tuebingen.mpg.de/
(@NIPS09) http://opt.kyb.tuebingen.mpg.de/

Both these workshops had packed (often overpacked) attendance almost throughout the day. This enthusiastic reception reflects the strong interest, relevance, and importance enjoyed by optimization in the greater ML community.

One could ask: why does optimization attract such continued interest? The answer is simple but telling: optimization lies at the heart of almost every ML algorithm. For some algorithms textbook methods suffice, but the majority require tailoring algorithmic tools from optimization, which in turn depends on a deeper understanding of the ML requirements. In fact, ML applications and researchers are driving some of the most cutting-edge developments in optimization today. This intimate relation of optimization with ML is the key motivation for our workshop, which aims to foster discussion, discovery, and dissemination of the state-of-the-art in optimization, especially in the context of ML.

The workshop should realize its aims by:

* Providing a platform for increasing the interaction between researchers from optimization, operations research, statistics, scientific computing, and machine learning;
* Identifying key problems and challenges that lie at the intersection of optimization and ML;
* Narrowing the gap between optimization and ML, to help reduce rediscovery, and thereby accelerating new advances.

ADDITIONAL BACKGROUND AND MOTIVATION

Previous talks at the OPT workshops have covered frameworks for convex programs (D. Bertsekas), the intersection of ML and optimization, especially in the area of SVM training (S. Wright), large-scale learning via stochastic gradient methods and its tradeoffs (L. Bottou, N. Srebro), exploitation of structured sparsity in optimization (L. Vandenberghe), and randomized methods for extremely large-scale convex optimization (A. Nemirovski). Several important realizations were brought to the fore by these talks, and many of the dominant ideas will appear in our book on Optimization for Machine Learning (to be published by MIT Press).

Given the above background it is easy to acknowledge that optimization is indispensable to machine learning. But what more can we say beyond this obvious realization?

The ML community's interest in optimization continues to grow. Invited tutorials on optimization will be presented this year at ICML (N. Srebro) and NIPS (S. Wright). The traditional "point of contact" between ML and optimization, the SVM, continues to be a driver of research on a number of fronts. Much interest has focused recently on stochastic gradient methods, which can be used in an online setting and in settings where data sets are extremely large and high accuracy is not required. Regularized logistic regression is another area that has produced a recent flurry of activity at the intersection of the two communities. Many aspects of stochastic gradient remain to be explored: different algorithmic variants, customization to the data set structure, convergence analysis, sampling techniques, software, choice of regularization and tradeoff parameters, and parallelism. There also needs to be a better understanding of the limitations of these methods, of what can be done to accelerate them, and of how to detect when to switch to alternative strategies. In the logistic regression setting, the use of approximate second-order information has been shown to improve convergence, but many algorithmic issues remain. Detection of combined-effect predictors (which lead to a huge increase in the number of variables), the use of group regularizers, and the need to handle very large data sets in real time all present challenges.

To avoid becoming lopsided, our workshop will also admit the "not particularly large scale" setting, where one has time to wield substantial computational resources. In this setting, high-accuracy solutions and a deep understanding of the lessons contained in the data are needed. Examples valuable to ML researchers include the exploration of genetic and environmental data to identify risk factors for disease, or problems where the amount of observed data is not huge but the mathematical models are complex.
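To make the stochastic gradient discussion concrete, here is a minimal sketch of plain SGD for l2-regularized logistic regression; the toy data, step size, and regularization constant are illustrative assumptions, not recommendations:

```python
import numpy as np

def sgd_logistic(X, y, lam=0.01, lr=0.1, epochs=20, seed=0):
    """Plain SGD for l2-regularized logistic regression with
    labels y in {-1, +1}: one randomly drawn example per update."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            margin = y[i] * (X[i] @ w)
            # gradient of log(1 + exp(-margin)) plus the l2 term
            g = -y[i] * X[i] / (1.0 + np.exp(margin)) + lam * w
            w -= lr * g
    return w

# toy linearly separable problem
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
w = sgd_logistic(X, y)
acc = np.mean(np.sign(X @ w) == y)
```

With a suitably decaying step size the iterates converge to the regularized optimum; a constant step, as here, only hovers near it, which is often acceptable when high accuracy is not required.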

Workshop

Machine Learning in Computational Biology

Gunnar Rätsch · Jean-Philippe Vert · Tomer Hertz · Yanjun Qi
Dec 10, 7:30 AM - 6:30 PM Hilton: Sutcliffe B

The field of computational biology has seen dramatic growth over the past few years, in terms of newly available data, new scientific questions, and new challenges for learning and inference. In particular, biological data is often relationally structured and highly diverse, well suited to approaches that combine multiple sources of weak evidence from heterogeneous sources. These data may include sequenced genomes of a variety of organisms, gene expression data from multiple technologies, protein expression data, protein sequence and 3D structural data, protein interactions, gene ontology and pathway databases, genetic variation data (such as SNPs), and an enormous amount of textual data in the biological and medical literature. New types of scientific and clinical problems require the development of novel supervised and unsupervised learning methods that can use these growing resources.

The goal of this workshop is to present emerging problems and machine learning techniques in computational biology. We invited several speakers from the biology/bioinformatics community who will present current research problems in bioinformatics, and we invite contributed talks on novel learning approaches in computational biology. We encourage contributions describing either progress on new bioinformatics problems or work on established problems using methods that are substantially different from standard approaches. Kernel methods, graphical models, feature selection and other techniques applied to relevant bioinformatics problems would all be appropriate for the workshop.

Workshop

Monte Carlo Methods for Bayesian Inference in Modern Day Applications

Ryan Adams · Mark A Girolami · Iain Murray
Dec 10, 7:30 AM - 6:30 PM Hilton: Mt Currie North

Monte Carlo methods have been the dominant form of approximate inference for Bayesian statistics over the last couple of decades. They are interesting as a technical topic of research in themselves, as well as enjoying widespread practical use. In a wide range of application areas, Monte Carlo methods have enabled Bayesian inference over classes of statistical models that would previously have been infeasible. Despite this broad and sustained attention, it is often still far from clear how best to set up a Monte Carlo method for a given problem, how to diagnose whether it is working well, and how to improve under-performing methods. The impact of these issues is even more pronounced in new emerging applications. This workshop is aimed equally at practitioners and core Monte Carlo researchers. For practitioners, we hope to identify which properties of applications are important for selecting, running, and checking a Monte Carlo algorithm; more broadly, since Monte Carlo methods are applied to such disparate problem areas, the workshop aims to identify and explore the properties of those areas that matter when applying Monte Carlo methods.
The workshop wiki contains a more detailed list of discussion topics and recommended background reading. We welcome contributions: anyone can create an account and edit the wiki.
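As one minimal, self-contained example of the methods under discussion, here is a random-walk Metropolis sampler targeting a one-dimensional standard normal; the step size and chain length are arbitrary illustrative choices:

```python
import numpy as np

def metropolis(log_p, x0, n_samples=20000, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + step * N(0, 1) and
    accept with probability min(1, p(x') / p(x))."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_samples)
    for t in range(n_samples):
        prop = x + step * rng.normal()
        if np.log(rng.uniform()) < log_p(prop) - log_p(x):
            x = prop                      # accept; otherwise keep x
        samples[t] = x
    return samples

# target: standard normal, known only up to a constant
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0)
```

Even for this trivial target, the questions raised above apply: the step size controls the acceptance rate and mixing, and diagnosing convergence from `chain` alone is nontrivial.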

Workshop

Computational Social Science and the Wisdom of Crowds

Jennifer Wortman Vaughan · Hanna Wallach
Dec 10, 7:30 AM - 6:30 PM Westin: Callaghan

Computational social science is an emerging academic research area at the intersection of computer science, statistics, and the social sciences, in which quantitative methods and computational tools are used to identify and answer social science questions. The field is driven by new sources of data from the Internet, sensor networks, government databases, crowdsourcing systems, and more, as well as by recent advances in computational modeling, machine learning, statistics, and social network analysis.

The related area of social computing deals with the mechanisms through which people interact with computational systems, examining how and why people contribute to crowdsourcing sites, and the Internet more generally. Examples of social computing systems include prediction markets, reputation systems, and collaborative filtering systems, all designed with the intent of capturing the wisdom of crowds.

Machine learning plays an important role in both of these research areas, but to make truly groundbreaking advances, collaboration is necessary: social scientists and economists are uniquely positioned to identify the most pertinent and vital questions and problems, as well as to provide insight into data generation, while computer scientists contribute significant expertise in developing novel, quantitative methods and tools. To date there have been few in-person venues for researchers in these traditionally disparate areas to interact. This workshop will address this need, with an emphasis on the role of machine learning, making NIPS an ideal venue. We hope to attract a mix of established members of the NIPS community and researchers who have never attended NIPS and will provide an entirely new perspective.

The primary goals of the workshop are to provide an opportunity for attendees to meet, interact, share ideas, establish new collaborations, and to inform the wider NIPS community about current research in computational social science and social computing.

Program Committee: Lars Backstrom (Cornell University), Jordan Boyd-Graber (University of Maryland), Jonathan Chang (Facebook), Sanmay Das (Rensselaer Polytechnic Institute), Ofer Dekel (Microsoft Research), Laura Dietz (Max Planck Institute for Computer Science), Arpita Ghosh (Yahoo! Research), John Horton (Harvard University), Shaili Jain (Yale University), David Jensen (University of Massachusetts, Amherst), Lian Jian (Annenberg School of Communications, University of Southern California), Edith Law (Carnegie Mellon University), David Lazer (Political Science and Computer Science, Northeastern University & Kennedy School of Government, Harvard University), Winter Mason (Yahoo! Research), Andrew McCallum (University of Massachusetts, Amherst), Mary McGlohon (Google), Daniel Ramage (Stanford University), Noah Smith (Carnegie Mellon University), Victoria Stodden (Yale Law School), and Sid Suri (Yahoo! Research).

Please visit the workshop website for up-to-date information about the schedule, including the schedule of posters.

Workshop

Beyond classification: Machine Learning for next generation Computer Vision challenges

Craig Saunders · Jakob Verbeek · Svetlana Lazebnik
Dec 10, 7:30 AM - 6:30 PM Westin: Alpine BC

This workshop seeks to excite and inform researchers tackling the next level of problems in the area of Computer Vision. The idea is both to give Computer Vision researchers access to the latest Machine Learning research, and to communicate to researchers in the machine learning community some of the latest challenges in computer vision, in order to stimulate the emergence of the next generation of learning techniques. The workshop itself is motivated from several different points of view:

1) There is great interest in and take-up of machine learning techniques in the computer vision community. In top vision conferences such as CVPR, machine learning is prevalent: there is widespread use of Bayesian Techniques, Kernel Methods, Structured Prediction, Deep Learning, etc., and many vision conferences have featured invited speakers from the machine learning community.

2) Despite the quality of this research and the significant adoption of machine learning techniques, such techniques are often used as "black box" parts of a pipeline, performing traditional tasks such as classification or feature selection, rather than fundamentally taking a learning approach to solving some of the unique problems arising in real-world vision applications.

3) Beyond object recognition and robot navigation, many interesting problems in computer vision are less well known. These include more complex tasks such as joint geometric/semantic scene parsing, object discovery, modeling of visual attributes, image aesthetics, etc.

4) Even within the domain of "classic" recognition systems, we also face significant challenges in scaling up machine learning techniques to millions of images and thousands of categories (consider, for example, the ImageNet data set).

5) Images often come with extra multi-modal information (social network graphs, user preferences, implicit feedback indicators, etc.), and this information is often poorly used, or integrated in an ad-hoc fashion.

This workshop therefore seeks to bring together machine learning and computer vision researchers to discuss these challenges, show current progress, highlight open questions and stimulate promising future research.

Workshop

Machine Learning for Assistive Technologies

Jesse Hoey · Pascal Poupart · Thomas Ploetz
Dec 10, 7:30 AM - 6:30 PM Westin: Glacier

An aging demographic has been identified as a challenge for healthcare provision, with technology tipped to play an increasingly significant role. Already, assistive technologies for cognitive and physical disabilities are being developed at an increasingly rapid rate. However, the use of complex technological solutions by specific and diverse user groups is a significant challenge for universal design. For example, 'smart homes' that recognise inhabitant activities for assessment and assistance have not seen significant uptake by target user groups. The reason for this is primarily that user requirements for this type of technology are very diverse, making a single universal design extremely challenging. Machine learning techniques are therefore playing an increasing role in allowing assistive technologies to be adaptive to persons with diverse needs. However, the ability to adapt to these needs carries a number of theoretical challenges and research directions, including but not limited to decision making under uncertainty, sequence modeling, activity recognition, active learning, hierarchical models, sensor networks, computer vision, preference elicitation, interface design and game theory. This workshop will expose the research area of assistive technology to machine learning specialists, will provide a forum for machine learning researchers and medical/industrial practitioners to brainstorm about the main challenges, and will lead to developments of new research ideas and directions in which machine learning approaches are applied to complex assistive technology problems.

Workshop

Modeling Human Communication Dynamics

Louis-Philippe Morency · Daniel Gatica-Perez · Nigel G Ward
Dec 10, 7:30 AM - 6:30 PM Westin: Alpine A

Modeling human communicative dynamics brings exciting new problems and challenges to the NIPS community. The first goal of this workshop is to raise awareness in the machine learning community of these problems, including some applications needs, the special properties of these input streams, and the modeling challenges. The second goal is to exchange information about methods, techniques, and algorithms suitable for modeling human communication dynamics.

Face-to-face communication is a highly interactive process in which the participants mutually exchange and interpret verbal and nonverbal messages. Both the interpersonal dynamics and the dynamic interactions among an individual's perceptual, cognitive, and motor processes are swift and complex. How people accomplish these feats of coordination is a question of great scientific interest. Models of human communication dynamics also have much potential practical value, for applications including the understanding of communications problems such as autism and the creation of socially intelligent robots able to recognize, predict, and analyze verbal and nonverbal behaviors in real-time interaction with humans.

Workshop

Machine Learning in Online Advertising

James G Shanahan · Deepak Agarwal · Tao Qin · Tie-Yan Liu
Dec 10, 7:30 AM - 6:30 PM Hilton: Diamond Head

Over the past 15 years online advertising, a $65 billion industry worldwide in 2009, has been pivotal to the success of the world wide web. This success has arisen largely from the transformation of the advertising industry from the low-tech, human-intensive, "Mad Men" (a reference to the AMC TV series) way of doing work that was commonplace for much of the 20th century and the early days of online advertising, to the highly optimized, mathematical, machine-learning-centric processes (some of which have been adapted from Wall Street) that form the backbone of many current online advertising systems.

The dramatic growth of online advertising poses great challenges to the machine learning research community and calls for new technologies to be developed. Online advertising is a complex problem, especially from a machine learning point of view. It involves multiple parties (advertisers, users, publishers, and ad platforms), which interact with each other and also have conflicting interests. It is highly dynamic in terms of the rapid change of user information needs, the non-stationary bids of advertisers, and the frequent launch of new ad campaigns. It is of very large scale, with billions of keywords, tens of millions of ads, billions of users, and millions of advertisers, and events such as clicks and actions can be extremely rare. In addition, the field lies at the intersection of machine learning, economics, optimization, distributed systems, and information science. For such a complex problem, conventional machine learning technologies and evaluation methodologies might not be sufficient, and the development of new algorithms and theories is sorely needed.

The goal of this workshop is to overview the state of the art in online advertising and to discuss future directions and challenges in research and development from a machine learning point of view. We expect the workshop to help develop a community of researchers who are interested in this area, and to yield future collaboration and exchanges.


Possible topics include:
1) Dynamic/non-stationary/online learning algorithms for online advertising
2) Large scale machine learning for online advertising
3) Learning theory for online advertising
4) Learning to rank for ads display
5) Auction mechanism design for paid search, social network advertising and microblog advertising
6) System modeling for ad platform
7) Traffic and click through rate prediction
8) Bid optimization
9) Metrics and evaluation
10) Yield optimization
11) Behavioral targeting modeling
12) Click fraud detection
13) Privacy in advertising
14) Crowdsourcing and inference
15) Mobile advertising and social advertising
16) Public datasets creation for research on online advertising
Workshop

Tensors, Kernels, and Machine Learning

Tamara G Kolda · Vicente Malave · David F Gleich · Johan Suykens · Marco Signoretto · Andreas Argyriou
Dec 10, 7:30 AM - 6:30 PM Westin: Nordic

Tensors are a generalization of vectors and matrices to higher dimensions. The goal of this workshop is to explore the links between tensors and information processing. We expect that many problems in, for example, machine learning and kernel methods can benefit from being expressed as tensor problems; conversely, the tensor community may learn from the estimation techniques commonly used in information processing and from some of the kernel extensions to nonlinear models.

On the other hand, standard tensor-based techniques can only deliver
multi-linear models. As a consequence, they may suffer from limited
discriminative power. A properly defined kernel-based extension might
overcome this limitation. The statistical machine learning community has
much to offer on different aspects such as learning (supervised,
unsupervised and semi-supervised) and generalization, regularization
techniques, loss functions and model selection.

The goal of this workshop is to promote the cross-fertilization between
machine learning and tensor-based techniques.

This workshop is appropriate for anyone who wishes to learn more about
tensor methods and/or share their machine learning or kernel techniques
with the tensor community; conversely, we invite contributions from
tensor experts seeking to use tensors for problems in machine learning
and information processing.

We hope to discuss the following topics:

* Applications using tensors for information processing (e.g., image recognition, EEG, text analysis, diffusion-weighted tensor imaging), as well as the appropriateness of tensor models for various information processing tasks.

* Specialized tensor decompositions that may be of interest for information processing (e.g., nonnegative factorizations, specialized objective functions or constraints, symmetric factorizations, handling missing data, handling special types of noise).

* Information processing techniques that have connections to tensor representations and factorizations, such as nonlinear kernel methods, multi-task learning, and specialized learning algorithms that can be adapted to tensor factorizations.

* Theoretical questions of interest in applying tensor information processing methods (e.g., questions surrounding tensor rank, and the extension of the nuclear norm to tensors).

Workshop

Practical Application of Sparse Modeling: Open Issues and New Directions

Irina Rish · Alexandru Niculescu-Mizil · Guillermo Cecchi · Aurelie Lozano
Dec 10, 7:30 AM - 6:30 PM Hilton: Sutcliffe A

Sparse modeling is a rapidly developing area at the intersection of statistics, machine learning, and signal processing, focused on the problem of variable selection in high-dimensional datasets. Selection (and, moreover, construction) of a small set of highly predictive variables is central to many applications where the ultimate objective is to enhance our understanding of underlying physical, biological, and other natural processes, beyond just building accurate "black-box" predictors.

Recent years have witnessed a flurry of research on algorithms and theory for sparse modeling, mainly focused on l1-regularized optimization, a convex relaxation of the (NP-hard) smallest subset selection problem. Examples include sparse regression methods such as the Lasso and its many extensions (Elastic Net, fused Lasso, group Lasso, simultaneous (multi-task) Lasso, adaptive Lasso, bootstrap Lasso, etc.), sparse graphical model selection, sparse dimensionality reduction (sparse PCA, CCA, NMF, etc.), and learning dictionaries that allow sparse representations. Applications of these methods are wide-ranging, including computational biology, neuroscience, image processing, stock market prediction and social network analysis, as well as compressed sensing, an extremely fast-growing area of signal processing.
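As a concrete instance of the l1-regularized optimization just mentioned, here is a minimal iterative soft-thresholding (ISTA) sketch for the Lasso; the toy data, regularization level, and iteration count are illustrative assumptions:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    """ISTA for min_w 0.5 * ||X w - y||^2 + lam * ||w||_1:
    a gradient step on the quadratic term, then soft-thresholding."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        w = soft_threshold(w - X.T @ (X @ w - y) / L, lam / L)
    return w

# toy sparse recovery: 3 relevant variables out of 20
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
w_true = np.zeros(20)
w_true[:3] = [3.0, -2.0, 1.5]
y = X @ w_true + 0.1 * rng.normal(size=100)
w_hat = lasso_ista(X, y, lam=5.0)
support = np.flatnonzero(np.abs(w_hat) > 1e-6)
```

On an easy instance like this the estimated support matches the truth; the practical open issues are precisely about the regimes where it does not.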

However, is the promise of sparse modeling realized in practice? It turns out that, despite the significant advances in the field, a number of open issues remain when sparse modeling meets real-life applications. Below we mention only a few of them (see the workshop website for a more detailed discussion): stability of sparse models; selection of the "right" regularization parameter (model selection); finding the "right" representation (dictionary learning); handling structured sparsity; evaluation of the results; interpretability.

We would like to invite researchers working on methodology, theory and especially applications of sparse modeling to share their experiences and insights into both the basic properties of the methods, and the properties of the application domains that make particular methods more (or less) suitable. Moreover, we plan to have a brainstorming session on various open issues, including (but not limited to) the ones mentioned above, and hope to come up with a set of new research directions motivated by problems encountered in practical applications.

We welcome submissions on various practical aspects of sparse modeling, specifically focusing on the following questions:

* Does sparse modeling provide a meaningful interpretation of interest to domain experts? What other properties of sparse models are desirable for better interpretability?
* How robust is the method with respect to various types of noise in the data?
* What type of method (e.g., combination of regularizers) is best suited for a particular application, and why?
* What is the best representation allowing for sparse modeling in your domain, and how do you find such a representation efficiently?
* How is the model evaluated with respect to its structure-recovery quality?

Workshop

Deep Learning and Unsupervised Feature Learning

Honglak Lee · Marc'Aurelio Ranzato · Yoshua Bengio · Geoffrey E Hinton · Yann Lecun · Andrew Y Ng
Dec 10, 7:30 AM - 6:30 PM Hilton: Cheakmus

In recent years, there has been a lot of interest in algorithms that learn feature hierarchies from unlabeled data. Deep learning methods such as deep belief networks, sparse coding-based methods, convolutional networks, and deep Boltzmann machines, have shown promise and have already been successfully applied to a variety of tasks in computer vision, audio processing, natural language processing, information retrieval, and robotics.


In this workshop, we will bring together researchers who are interested in deep learning and unsupervised feature learning, review the recent technical progress, discuss the challenges, and identify promising future research directions. Through invited talks, panel discussions and presentations by attendees we will attempt to address some of the most important topics in deep learning today. We will discuss whether and why hierarchical systems are beneficial, what principles should guide the design of objective functions used to train these models, what are the advantages and disadvantages of bottom-up versus top-down approaches, how to design scalable systems, and how deep models can relate to biological systems. Finally, we will try to identify some of the major milestones and goals we would like to achieve during the next 5 or 10 years of research in deep learning.

Workshop

Robust Statistical Learning

Pradeep Ravikumar · Constantine Caramanis · Sujay Sanghavi
Dec 10, 7:30 AM - 6:30 PM Hilton: Mt Currie South

At the core of statistical machine learning is the task of inferring conclusions from data, typically using statistical models that describe probabilistic relationships among the underlying variables. Such modeling allows us to make strong predictions, even from limited data, by leveraging specific problem structure. On the flip side, however, when the specific model assumptions do not exactly hold, the resulting methods may deteriorate severely. A simple example: even a few corrupted points, or points with a few corrupted entries, can severely throw off standard SVD-based PCA.

The goal of this workshop is to investigate this "robust learning" setting, in which the data deviate from the model assumptions in a variety of different ways. Depending on what is known about the deviations, we can have a spectrum of approaches:

(a) Dirty Models: Statistical models that impose "clean" structural assumptions such as sparsity, low rank, etc. have proven very effective at imposing bias without being overly restrictive. A superposition of two (or more) such clean models can provide a method that is also robust. For example, approximating data by the sum of a sparse matrix and a low-rank one leads to PCA that is robust to corrupted entries.

(b) Robust Optimization: Most statistical learning methods implicitly or explicitly have an underlying optimization problem. Robust optimization uses techniques from convexity and duality, to construct solutions that are immunized from some bounded level of uncertainty, typically expressed as bounded (but otherwise arbitrary, i.e., adversarial) perturbations of the decision parameters.

(c) Classical Robust Statistics; Adversarial Learning: There has been a large body of work on classical robust statistics, which develops estimation methods that are robust to misspecified modeling assumptions in general, and do not model the outliers specifically. While this area is still quite active, it has a long history, with many results developed in the 60s, 70s and 80s. There has also been significant recent work in adversarial machine learning.
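The sparse-plus-low-rank decomposition mentioned in (a) can be illustrated with a simple alternating heuristic; note the assumptions here (known rank, hand-picked threshold), and that the robust-PCA literature instead solves a convex principal component pursuit problem:

```python
import numpy as np

def sparse_plus_lowrank(M, rank, thresh, n_iter=50):
    """Alternate between a truncated SVD for the low-rank part L and
    entrywise hard thresholding for the sparse corruption S, so that
    M is approximated by L + S."""
    S = np.zeros_like(M)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        R = M - L
        S = np.where(np.abs(R) > thresh, R, 0.0)   # keep only large residuals
    return L, S

# toy instance: a rank-3 matrix plus a few large corruptions
rng = np.random.default_rng(0)
L_true = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 50))
S_true = np.zeros((50, 50))
S_true.flat[rng.choice(2500, size=50, replace=False)] = 10.0
L_hat, S_hat = sparse_plus_lowrank(L_true + S_true, rank=3, thresh=3.0)
err = np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true)
```

Unlike plain SVD on the corrupted matrix, the sparse component absorbs the outlying entries, so the recovered low-rank part stays close to the truth on this easy instance.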

Thus, we see that while there has been a resurgence of robust learning methods (broadly understood) in recent years, it seems to be largely coming from different communities that rarely interact: (classical) robust statistics, adversarial machine learning, robust optimization, and multi-structured or dirty model learning. It is the aim of this workshop to bring together researchers from these different communities, and identify common intuitions underlying such robust learning methods. Indeed, with increasingly high-dimensional and "dirty" real-world data that do not conform to clean modeling assumptions, this is a vital necessity.

Workshop

Coarse-to-Fine Learning and Inference

Ben Taskar · David J Weiss · Benjamin J Sapp · Slav Petrov
Dec 10, 7:30 AM - 6:30 PM Westin: Alpine DE

The bottleneck in many complex prediction problems is the prohibitive cost of inference or search at test time. Examples include structured problems such as object detection and segmentation, natural language parsing and translation, as well as standard classification with kernelized or costly features or a very large number of classes. These problems present a fundamental trade-off between approximation error (bias) and inference or search error due to computational constraints as we consider models of increasing complexity. This trade-off is much less understood than the traditional approximation/estimation (bias/variance) trade-off but is constantly encountered in machine learning applications. The primary aim of this workshop is to formally explore this trade-off and to unify a variety of recent approaches, which can be broadly described as ``coarse-to-fine'' methods, that explicitly learn to control this trade-off. Unlike approximate inference algorithms, coarse-to-fine methods typically involve exact inference in a coarsened or reduced output space that is then iteratively refined. They have been used with great success in specific applications in computer vision (e.g., face detection) and natural language processing (e.g., parsing, machine translation). However, coarse-to-fine methods have not been studied and formalized as a general machine learning problem. Thus many natural theoretical and empirical questions have remained un-posed; e.g., when will such methods succeed, what is the fundamental theory linking these applications, and what formal guarantees can be found?

In order to begin asking and answering these questions, our workshop will bring together researchers from machine learning, computer vision, and natural language processing who are addressing large-scale prediction problems where inference cost is a major bottleneck. To this end, a significant portion of the workshop will be given over to discussion, in the form of two organized panel discussions and a small poster session. We have taken care to invite speakers who come from each of the research areas mentioned above, and we intend to similarly ensure that the panels are composed of speakers from multiple communities. Furthermore, because the ``coarse-to-fine'' label is broadly interpreted across many different fields, we also invite any submission that involves learning to address the bias/computation trade-off or that provides new theoretical insight into this problem. We anticipate that this workshop will lead to concrete new research directions in the analysis and development of coarse-to-fine and other methods that address the bias/computation trade-off, including the establishment of several benchmark problems to allow easier entry into this area by researchers who are not domain experts.
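The coarse-to-fine idea described above (prune the output space with a cheap model, then reason exactly over the survivors) can be sketched in a few lines. The scoring functions below are hypothetical placeholders, not any specific system discussed at the workshop:

```python
import numpy as np

def coarse_to_fine_predict(x, coarse_score, fine_score, labels, keep=0.2):
    """Score all labels with a cheap coarse model, keep the top fraction,
    then run the expensive fine model only on the surviving labels."""
    c = np.array([coarse_score(x, y) for y in labels])
    k = max(1, int(keep * len(labels)))
    survivors = [labels[i] for i in np.argsort(-c)[:k]]  # best-scoring 20%
    f = [fine_score(x, y) for y in survivors]
    return survivors[int(np.argmax(f))]

# Illustrative usage: the coarse model is a crude proxy for the fine one,
# so exact inference over the pruned set recovers the fine optimum.
labels = list(range(100))
coarse = lambda x, y: -abs(y - x)      # cheap heuristic score
fine = lambda x, y: -(y - x) ** 2      # expensive "exact" score
```

The interesting theoretical question posed by the workshop is exactly when such pruning is safe, i.e., when the coarse model's top candidates are guaranteed to contain the fine optimum.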

Workshop

Decision Making with Multiple Imperfect Decision Makers

Miroslav Karny · Tatiana V. Guy · David H Wolpert
Dec 10, 7:30 AM - 6:30 PM Hilton: Black Tusk

Prescriptive Bayesian decision making has reached a high level of maturity, supported by efficient, theoretically well-founded algorithms. While the long-standing problem of participants' rationality has been addressed repeatedly, the limited cognitive, acting, and evaluative abilities/resources of the participants involved have not been considered systematically. This problem of so-called imperfect decision makers emerges repeatedly: for instance, (i) the consistent theory of incomplete Bayesian games cannot be applied by such participants; (ii) a desirable incorporation of deliberation effort into the design of decision strategies remains unsolved.

Societal, biological, and engineered systems exhibit paradigms that can extend the scope of existing knowledge in prescriptive decision making. The societal and natural sciences, and partially technology, have considered aspects of imperfection at the descriptive level. In particular, the broadly studied emergent behaviour resulting from descriptive properties of interacting imperfect decision makers can be exploited in prescriptive decision making. The goal of this workshop is to explore such connections between descriptive and prescriptive decision making and to stimulate an exchange of results and ideas. The workshop will foster discussion of bounded rationality and the imperfection of decision makers in light of Nature. We believe that in the long term, the workshop will contribute to the solution of the following problems:

A. How to formalise rational decision making of an imperfect participant?
B. How to create a feasible prescriptive theory, which respects an imperfect participant?
C. How to extend/modify existing feasible prescriptive theories to imperfect decision-makers?
This topic spans both theoretical issues and the development of effective algorithms, and it is closely related to the problem of control under varying/uncertain resource constraints and to the problem of decision-making cost.
The workshop aims to bring together different scientific communities, to brainstorm on possible research directions, and to encourage collaboration among researchers with complementary ideas and expertise. The workshop will be based on invited talks, contributed talks, and posters. Extensive moderated and informal discussions will ensure a targeted exchange.

Workshop

Machine Learning for Social Computing

Zenglin Xu · Irwin King · Shenghuo Zhu · Yuan Qi · Rong Yan · John Yen
Dec 11, 7:30 AM - 6:30 PM Westin: Alpine A

Social computing aims to support online social behavior through computational methods. The explosion of the Web has created, and continues to create, social interactions and social contexts through the use of software, services, and technologies such as blogs, microblogs (tweets), wikis, social network services, social bookmarking, social news, multimedia sharing sites, online auctions, reputation systems, and so on. Analyzing the information underlying these social interactions and social contexts, e.g., community detection, opinion mining, trend prediction, anomaly detection, product recommendation, expert finding, social ranking, and information visualization, will benefit both information providers and information consumers in the application areas of the social sciences, economics, psychology, and computer science. However, the large volumes of user-generated content and the complex structures among users and related entities require effective modeling methods and efficient algorithms, which therefore pose challenges to advanced techniques in machine learning. There are three major concerns:

1. How to effectively and accurately model the task at hand as a learning problem?
2. How to construct efficient and scalable algorithms to solve the learning task?
3. How to fully explore and exploit human computation?


This workshop aims to bring together researchers and practitioners interested in this area to share their perspectives, identify the challenges and opportunities, and discuss future research/application directions through invited talks, panel discussion, and oral/poster presentations.


We invite papers that address problems in social computing using machine learning methods, such as statistical methods, graphical models, graph mining, matrix factorization, learning to rank, optimization, temporal analysis, information visualization, transfer learning, and others.

Workshop

Challenges of Data Visualization

Barbara Hammer · Laurens van der Maaten · Fei Sha · Alexander Smola
Dec 11, 7:30 AM - 6:30 PM Westin: Alpine DE

The increasing amount and complexity of electronic data sets turns visualization into a key technology for providing an intuitive interface to the information. Unsupervised learning has developed powerful techniques for, e.g., manifold learning, dimensionality reduction, collaborative filtering, and topic modeling. However, the field has so far not fully appreciated the problems that data analysts seeking to apply unsupervised learning to information visualization are facing, such as heterogeneous and context-dependent objectives or streaming and distributed data of differing credibility. Moreover, the unsupervised learning field has hitherto failed to develop human-in-the-loop approaches to data visualization, even though such approaches, including e.g. user relevance feedback, are necessary to arrive at valid and interesting results. As a consequence, a number of challenges arise in the context of data visualization which cannot be solved by classical methods in the field:
* Methods have to deal with modern data formats and data sets: How can the technologies be adapted to deal with streaming and possibly non-i.i.d. data sets? How can specific data formats such as spatio-temporal data, spectral data, or data characterized by a general (possibly non-metric) dissimilarity measure be visualized appropriately? How can we deal with heterogeneous data and differing credibility? How can the dissimilarity measure be adapted to emphasize the aspects relevant for visualization?
* Available techniques for specific tasks should be combined in a canonical way: How can unsupervised learning techniques be combined to construct good visualizations? For instance, how can we effectively combine techniques for clustering, collaborative filtering, and topic modeling with dimensionality reduction to construct scatter plots that reveal the similarity between groups of data, movies, or documents? How can we arrive at context-dependent visualization?
* Visualization techniques should be accompanied by theoretical guarantees: What are reasonable mathematical specifications of data visualization to shape this inherently ill-posed problem? Can this be controlled by the user in an efficient way? How can visualization be evaluated? What are reasonable benchmarks? What are reasonable evaluation measures?
* Visualization techniques should be ready to use for users outside the field: Which methods are suited to users outside the field? How can we avoid the need to set specific technical parameters or choose among different mathematical algorithms by hand? Can this necessity be replaced by intuitive interactive mechanisms usable by non-experts?
The goal of the workshop is to identify the state-of-the-art with respect to these challenges and to discuss possibilities to meet these demands with modern techniques.
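One concrete instance of combining unsupervised techniques for visualization, as raised in the second challenge above, is to cluster the data and then project it to two dimensions for a scatter plot. A minimal sketch (the deterministic initialization and parameter choices are illustrative assumptions, and the k-means step assumes clusters never empty out):

```python
import numpy as np

def pca_project(X, dim=2):
    """Project data onto its top principal directions for a 2-D scatter plot."""
    Xc = X - X.mean(axis=0)
    # eigh returns eigenvalues in ascending order; take the last `dim` vectors
    _, vecs = np.linalg.eigh(Xc.T @ Xc)
    return Xc @ vecs[:, -dim:]

def kmeans(X, k=2, iters=20):
    """Plain Lloyd's algorithm with deterministic farthest-point initialization
    (chosen here so the sketch is reproducible without a random seed)."""
    idx = [0]
    for _ in range(k - 1):
        d = ((X[:, None] - X[idx]) ** 2).sum(-1).min(1)
        idx.append(int(np.argmax(d)))
    centers = X[idx]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.stack([X[labels == j].mean(0) for j in range(k)])
    return labels
```

Coloring the points of `pca_project(X)` by `kmeans(X, k)` gives the kind of combined cluster-plus-projection scatter plot the challenge alludes to.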

Workshop

Charting Chemical Space: Challenges and Opportunities for AI and Machine Learning

Pierre Baldi · Klaus-Robert Müller · Gisbert Schneider
Dec 11, 7:30 AM - 6:30 PM Westin: Glacier

In spite of its central role and position between physics and biology, chemistry has remained in a somewhat backward state of informatics development compared to its two close relatives, primarily for historical reasons. Computers, open public databases, and large collaborative projects have become the pervasive hallmark of research in physics and biology, but are still at an early stage of development in chemistry. Recently, however, large repositories with millions of small molecules have become freely available, and equally large repositories of chemical reactions have also become available, albeit not freely. These data create a wealth of interesting informatics and machine learning challenges to efficiently store, search, and predict the physical, chemical, and biological properties of small molecules and reactions and chart ``chemical space'', with significant scientific and technological impacts.

Small organic molecules, in particular, with at most a few dozen atoms play a fundamental role in chemistry, biology, biotechnology, and pharmacology. They can be used, for instance, as combinatorial building blocks for chemical synthesis, as molecular probes for perturbing and analyzing biological systems in chemical genomics and systems biology, and for the screening, design, and discovery of new drugs and other useful compounds. Huge arrays of new small molecules can be produced in a relatively short time. Chemoinformatics methods must be able to cope with the inherently graphical, non-vectorial nature of raw chemical information on small organic molecules and organic reactions, and the vast combinatorial nature of chemical space, containing over 10^60 possible small organic molecules. Recently described grand challenges for chemoinformatics include: (1) overcoming stalled drug discovery; (2) helping to develop green chemistry and address global warming; (3) understanding life from a chemical perspective; and (4) enabling the network of the world's chemical and biological information to be accessible and interpretable.
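To make the notion of searching chemical space concrete: small molecules are commonly encoded as binary substructure fingerprints and compared with the Tanimoto (Jaccard) coefficient. A minimal sketch, where the fingerprints are arbitrary sets of on-bit indices rather than any real chemical encoding:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity between two binary fingerprints,
    each represented as a set of on-bit indices."""
    a, b = set(fp_a), set(fp_b)
    if not a and not b:
        return 1.0  # two empty fingerprints are conventionally identical
    inter = len(a & b)
    return inter / (len(a) + len(b) - inter)
```

Nearest-neighbor search under this similarity is the workhorse behind the storage/search challenges mentioned above, and its scaling to millions of molecules is exactly where machine learning methods come in.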

This one day workshop will provide a forum to brainstorm about these issues, explore the role and contributions machine learning methods can make to chemistry and chemoinformatics, and hopefully foster new ideas and collaborations.

Workshop

Numerical Mathematics Challenges in Machine Learning

Matthias Seeger · Suvrit Sra
Dec 11, 7:30 AM - 6:30 PM Hilton: Diamond Head

Most machine learning (ML) methods are based on numerical mathematics (NM)
concepts, ranging from differential equation solvers and dense matrix
factorizations to iterative linear system and eigenvalue solvers. For problems
of moderate size, NM routines can be invoked in a black-box fashion. However,
for a growing number of real-world ML applications, this separation is
insufficient and turns out to be a limit on further progress.

The increasing complexity of real-world ML problems must be met with layered
approaches, where algorithms are long-running and reliable components rather
than stand-alone tools tuned individually to each task at hand. Constructing
and justifying dependable reductions requires at least some awareness of NM
issues. With more and more basic learning problems being solved sufficiently
well at the prototype level, advancing towards real-world practice requires
ensuring the following key properties: scalability, reliability, and
numerical robustness.
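As one example of the kind of iterative NM routine at issue here, conjugate gradients solve the symmetric positive-definite linear systems that arise in, e.g., ridge regression or Gaussian-process inference, touching the matrix only through matrix-vector products. A standard textbook sketch:

```python
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A, given only a
    matrix-vector product -- the interface iterative NM solvers expose."""
    x = np.zeros_like(b)
    r = b - matvec(x)        # residual
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

The matvec-only interface is precisely what lets such a routine scale to structured or implicitly-defined matrices where forming A explicitly would be infeasible.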

By inviting numerical mathematics researchers with interest in both numerical
methodology and real problems in applications close to machine learning, we
will probe realistic routes out of the prototyping sandbox. Our aim is to
strengthen dialog between NM, signal processing, and ML. Speakers are briefed
to provide specific high-level examples of interest to ML and to point out
accessible software. We will initiate discussions about how best to bridge gaps
between ML requirements and NM interfaces and terminology.

The workshop will reinforce the community's growing attention to critical
issues of numerical scalability and robustness in algorithm design and
implementation. Further progress on most real-world ML problems is
conditional on good numerical practices, an understanding of basic robustness
and reliability issues, and a wider, more informed integration of good
numerical software. As most real-world applications come with reliability and
scalability requirements that are by and large ignored by current ML
methodology, the impact of pointing out tractable ways for improvement is
substantial.

Target audience:

Our workshop is targeted towards practitioners from NIPS, but is of interest
to numerical linear algebra researchers as well.

Workshop

Transfer Learning Via Rich Generative Models

Russ Salakhutdinov · Ryan Adams · Josh Tenenbaum · Zoubin Ghahramani · Tom Griffiths
Dec 11, 7:30 AM - 6:30 PM Westin: Emerald A

Intelligent systems must be capable of transferring previously-learned abstract knowledge to new concepts, given only a small or noisy set of examples. This transfer of higher order information to new learning tasks lies at the core of many problems in the fields of computer vision, cognitive science, machine learning, speech perception and natural language processing.

Over the last decade, there has been considerable progress in developing cross-task transfer (e.g., multi-task learning and semi-supervised learning) using both discriminative and generative approaches. However, many existing learning systems today cannot cope with new tasks for which they have not been specifically trained. Even when applied to related tasks, trained systems often display unstable behavior. More recently, researchers have begun developing new approaches to building rich generative models that are capable of extracting useful, high-level structured representations from high-dimensional sensory input. The learned representations have been shown to give promising results for solving a multitude of novel learning tasks, even though these tasks may be unknown when the generative model is being trained. A few notable examples include learning of Deep Belief Networks, Deep Boltzmann Machines, deep nonparametric Bayesian models, as well as Bayesian models inspired by human learning.

``Learning to learn'' new concepts via rich generative models has emerged as one of the most promising areas of research in both machine learning and cognitive science. Although there has been recent progress, existing computational models are still far from being able to represent, identify, and learn the wide variety of possible patterns and structure in real-world data. The goal of this workshop is to assess the current state of the field and explore new directions in both theoretical foundations and empirical applications.

Workshop

Low-rank Methods for Large-scale Machine Learning

Arthur Gretton · Michael W Mahoney · Mehryar Mohri · Ameet S Talwalkar
Dec 11, 7:30 AM - 6:30 PM Westin: Alpine BC

Today's data-driven society is full of large-scale datasets. In the context of machine learning, these datasets are often represented by large matrices holding either a set of real-valued features for each point or pairwise similarities between points. Hence, modern learning problems in computer vision, natural language processing, computational biology, and other areas often face the daunting task of storing and operating on matrices with thousands to millions of entries. An attractive solution to this problem involves working with low-rank approximations of the original matrix. Low-rank approximation is at the core of widely used algorithms such as Principal Component Analysis, Multidimensional Scaling, Latent Semantic Indexing, and manifold learning. Furthermore, low-rank matrices appear in a wide variety of applications including lossy data compression, collaborative filtering, image processing, text analysis, matrix completion, and metric learning. In this workshop, we aim to survey recent work on matrix approximation with an emphasis on usefulness for practical large-scale machine learning problems, and to provide a forum for researchers to discuss several important questions associated with low-rank approximation techniques.
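A popular route to such approximations is the randomized range finder: project the matrix onto a few random directions, orthonormalize, and solve a small reduced SVD. A sketch in the style of Halko, Martinsson, and Tropp (the oversampling parameter is a typical but arbitrary choice):

```python
import numpy as np

def randomized_low_rank(A, k, oversample=10, seed=0):
    """Rank-k approximation A ~ U diag(s) Vt via a random range finder."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)   # orthonormal basis for the (approx.) range
    B = Q.T @ A                      # small (k+oversample) x n problem
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U)[:, :k], s[:k], Vt[:k]
```

The key point for large-scale ML is that the expensive factorization happens only on the small projected matrix `B`, while `A` is touched just twice through matrix products.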

Workshop

Learning on Cores, Clusters, and Clouds

Alekh Agarwal · Lawrence Cayton · Ofer Dekel · John Duchi · John Langford
Dec 11, 7:30 AM - 6:30 PM Hilton: Mt Currie South

In the current era of web-scale datasets, high throughput biology and astrophysics, and multilanguage machine translation, modern datasets no longer fit on a single computer and traditional machine learning algorithms often have prohibitively long running times. Parallelized and distributed machine learning is no longer a luxury; it has become a necessity. Moreover, industry leaders have already declared that clouds are the future of computing, and new computing platforms such as Microsoft's Azure and Amazon's EC2 are bringing distributed computing to the masses. The machine learning community has been slow to react to these important trends in computing, and it is time for us to step up to the challenge.

While some parallel and distributed machine learning algorithms already exist, many relevant issues are yet to be addressed. Distributed learning algorithms should be robust to node failures and network latencies, and they should be able to exploit the power of asynchronous updates. Some of these issues have been tackled in other fields where distributed computation is more mature, such as convex optimization and numerical linear algebra, and we can learn from their successes and their failures.

The workshop aims to draw the attention of machine learning researchers to this rich and emerging area of problems and to establish a community of researchers that are interested in distributed learning. We would like to define a number of common problems for distributed learning (online/batch, synchronous/asynchronous, cloud/cluster/multicore) and to encourage future research that is comparable and compatible. We also hope to expose the learning community to relevant work in fields such as distributed optimization and distributed linear algebra. The day-long workshop aims to identify research problems that are unique to distributed learning.
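One of the simplest distributed patterns in the space sketched above, synchronous data-parallel gradient descent, can be simulated in a few lines (the least-squares objective and step size here are illustrative assumptions, and the "workers" are just shards processed in a loop):

```python
import numpy as np

def local_gradient(w, X, y):
    """Least-squares gradient computed on one worker's shard of the data."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def distributed_gd(shards, dim, lr=0.1, steps=200):
    """Synchronous data-parallel gradient descent: each node computes the
    gradient on its shard; the master averages them and updates the model."""
    w = np.zeros(dim)
    for _ in range(steps):
        grads = [local_gradient(w, X_s, y_s) for X_s, y_s in shards]
        w -= lr * np.mean(grads, axis=0)  # average over equal-sized shards
    return w
```

With equal-sized shards the averaged gradient equals the full-batch gradient, which is exactly why the synchronous variant is the natural baseline; the asynchronous and failure-tolerant variants discussed above relax this equivalence.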

The target audience includes leading researchers from academia and industry that are interested in distributed and large-scale learning.

Workshop

Advances in Activity-Dependent Synaptic Plasticity

Paul W Munro
Dec 11, 7:30 AM - 6:30 PM Westin: Callaghan

From Hebb's articulation of his Neurophysiological Postulate in 1949 to the present day, the relationship between synapse modification and neuronal activity has been the subject of enormous interest. Laboratory studies have revealed phenomena such as LTP, LTD, and STDP. Theoretical developments have both inspired experimental studies and been inspired by them. The intent of the proposed workshop is to foster communication among researchers in this field. The workshop is intended to be of interest to experimentalists and modelers studying plasticity from the neurobiological level to the cognitive level, and is targeted toward researchers in this area, hopefully drawing a 50/50 mix of experimental results and theoretical ideas. Another goal is to bring together established researchers with grad students and postdocs.

Workshop

Networks Across Disciplines: Theory and Applications

Edo M Airoldi · Anna Goldenberg · Jure Leskovec · Quaid Morris
Dec 11, 7:30 AM - 6:30 PM Westin: Nordic

Networks are used across a wide variety of disciplines to describe interactions between entities: in sociology these are relations between people, such as friendships (Facebook); in biology, physical interactions between genes; and many others: the Internet, sensor networks, transport networks, and ecological networks, to name a few. Computer scientists, physicists, and mathematicians search for mechanisms and models that could explain observed networks and analyze their properties. Research into the theoretical underpinnings of networks is very heterogeneous, and the breadth of existing and possible applications is vast. Yet many such works are known only within their specific areas. Many books and articles are written on the subject, making it hard to tease out the important unsolved questions, especially as richer data becomes available. These issues call for a collaborative environment where scientists from a wide variety of fields can exchange their ideas: theoreticians could learn about new questions in network applications, whereas applied researchers could learn about potential new solutions for their problems. Researchers in different disciplines approach network modeling from complementary angles. For example, in physics, scientists create generative models with the fewest number of parameters and are able to study the average behavior of very large networks, whereas in statistics and social science the focus is often on richer models and smaller networks. Continuous information exchange between these groups can facilitate faster progress in the field of network modeling and analysis.

Workshop

Learning and Planning from Batch Time Series Data

Daniel Lizotte · Michael Bowling · Susan Murphy · Joelle Pineau · Sandeep Vijan
Dec 11, 7:30 AM - 6:30 PM Hilton: Sutcliffe A

Intended Audience: Researchers interested in models and algorithms for learning and planning from batches of time series, including those interested in batch reinforcement learning, dynamic Bayes nets, dynamical systems, and similar topics. Also, researchers interested in any applications where such algorithms and models can be of use, for example in medicine and robotics.

Overview: Consider the problem of learning a model or control policy from a batch of trajectories collected a priori that record observations over time. This scenario presents an array of practical challenges. For example, batch data are often noisy and/or partially missing. The data may be high-dimensional because the data collector may not know a priori which observations are useful for decision making. In fact, a data collector may not even have a clear idea of which observations should be used to measure the quality of a policy. Finally, even given low-noise data with a few useful state features and a well-defined objective, the performance of the learner can only be evaluated using the same batch of data that was available for learning.

The above challenges encountered in batch learning and planning from time series data are beginning to be addressed by adapting techniques that have proven useful in regression and classification. Careful modelling, filtering, or smoothing could mitigate noisy or missing observations. Appropriate regularization could be used for feature selection. Methods from multi-criterion optimization could be useful for choosing a performance measure. Specialized data re-sampling methods could yield valid assessments of policy performance when gathering new on-policy data is not possible.
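For the batch setting above, a standard baseline is fitted Q-iteration: repeatedly regress an action-value function toward one-step Bellman backups computed from the fixed batch of trajectories. A tabular sketch (the toy MDP in the usage below is illustrative, and real applications would replace the table with a regressor):

```python
import numpy as np

def fitted_q_iteration(transitions, n_actions, gamma=0.9, iters=200):
    """Tabular fitted Q-iteration on a fixed batch of (s, a, r, s') tuples:
    each sweep moves Q(s, a) to the mean one-step Bellman backup."""
    states = ({s for s, _, _, _ in transitions}
              | {s2 for _, _, _, s2 in transitions})
    Q = {s: np.zeros(n_actions) for s in states}
    for _ in range(iters):
        backups = {}
        for s, a, r, s2 in transitions:
            backups.setdefault((s, a), []).append(r + gamma * Q[s2].max())
        new_Q = {s: Q[s].copy() for s in states}
        for (s, a), targets in backups.items():
            new_Q[s][a] = float(np.mean(targets))  # "regression" = mean
        Q = new_Q
    return Q
```

Note that the resulting greedy policy can only be evaluated on the same batch, which is precisely the off-policy evaluation difficulty the workshop highlights.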

As applications of reinforcement learning and related methods have become more widespread, practitioners have encountered the above challenges along with many others, and they have begun to develop and adapt a variety of methods from other areas of machine learning and statistics to address these challenges. The goal of our workshop is to further this development by bringing together researchers who are interested in learning and planning methods for batch time series data and researchers who are interested in applying these methods in medicine, robotics, and other relevant domains. Longer term we hope to jump-start synergistic collaborations aimed at improving the quality of learning and planning from training sets of time series for use in medical applications.

Workshop

Predictive Models in Personalized Medicine

Faisal Farooq · Glenn Fung · Romer Rosales · Shipeng Yu · Jude W Shavlik · Balaji R Krishnapuram · Raju Kucherlapati
Dec 11, 7:30 AM - 6:30 PM Hilton: Sutcliffe B

The purpose of this cross-discipline workshop is to bring together machine learning and healthcare researchers interested in problems and applications of predictive models in the field of personalized medicine. The goal of the workshop is to bridge the gap between the theory of predictive models and the applications and needs of the healthcare community. There will be an exchange of ideas, identification of important and challenging applications, and discovery of possible synergies. Ideally this will spur discussion and collaboration between the two disciplines and result in collaborative grant submissions. The emphasis will be on the mathematical and engineering aspects of predictive models and how they relate to practical medical problems.

Although related in a broad sense, the workshop does not directly overlap with the fields of bioinformatics and biostatistics. Although predictive modeling for healthcare has been explored by biostatisticians for several decades, this workshop focuses on substantially different needs and problems that are better addressed by modern machine learning technologies. For example, how should we organize clinical trials to validate the clinical utility of predictive models for personalized therapy selection? The traditional biostatistical approach of running trials on a large cohort of homogeneous patients would not suffice for the new paradigm, and new methods are needed. Bioinformatics, on the other hand, typically deals with the analysis of genomic and proteomic data to answer questions of relevance to basic science, for example the identification of sequences in the genome corresponding to genes, or of gene regulatory networks. This workshop does not focus on issues of basic science; rather, we focus on predictive models that combine all available patient data (including imaging, pathology, lab, genomics, etc.) to impact point-of-care decision making.

More recently, as part of the American Recovery and Reinvestment Act (ARRA), the US government set aside significant grant funds for cross-disciplinary research on the use of information technology to improve health outcomes, quality of care, and selection of therapy.

The workshop program will consist of presentations by invited speakers from both machine learning and personalized medicine fields and by authors of extended abstracts submitted to the workshop. In addition, there will be a slot for a panel discussion to identify important problems, applications and synergies between the two scientific disciplines.

Workshop

Machine Learning meets Computational Photography

Stefan Harmeling · Michael Hirsch · Bill Freeman · Peyman Milanfar
Dec 11, 7:30 AM - 6:30 PM Hilton: Black Tusk

Computational photography (CP) is a new field that explores and is about to redefine how we take photographs and videos. Applications of CP are not only ``everyday'' photography but also new methods for scientific imaging, such as microscopy, biomedical imaging, and astronomical imaging, and can thus be expected to have a significant impact in many areas.

There is an apparent convergence of methods, what we have traditionally called ``image processing'', and recently many works in machine vision, all of which seem to be addressing very much the same, if not tightly related, problems. These include deblurring, denoising, and enhancement algorithms of various kinds. What do we learn from this convergence and its application to CP? Can we create more contact between the practitioners of these fields, who often do not interact? Does this convergence mean that the fields are intellectually shrinking to the same point, or expanding and hence overlapping with each other more?

Besides discussing such questions, the goal of this workshop is two-fold: (i) to present the current approaches, their possible limitations, and open problems of CP to the NIPS community, and (ii) to foster interaction between researchers from machine learning, neuroscience, and CP to advance the state of the art in CP.

The key to existing CP approaches is to combine (i) creative hardware designs with (ii) sophisticated computations, such as new approaches to blind deconvolution. This interplay between hardware and software is what makes CP an ideal real-world domain for the whole NIPS community, who could contribute in various ways to its advancement, be it by enabling new imaging devices made possible by the latest machine learning methods or by new camera and processing designs inspired by our neurological understanding of natural visual systems.

Thus the target group of participants is researchers from the whole NIPS community (machine learning and neuroscience) and researchers working on CP and related fields.

Workshop

Discrete Optimization in Machine Learning: Structures, Algorithms and Applications

Andreas Krause · Pradeep Ravikumar · Jeffrey A Bilmes · Stefanie Jegelka
Dec 11, 7:30 AM - 6:30 PM Hilton: Cheakmus

Solving optimization problems with ultimately discrete solutions is becoming increasingly important in machine learning. At the core of statistical machine learning is inferring conclusions from data, and when the variables underlying the data are discrete, both the task of inferring the model from data and the task of performing predictions using the estimated model are discrete optimization problems. Many of the resulting optimization problems are NP-hard, and typically, as the problem size increases, standard off-the-shelf optimization procedures become intractable.

Fortunately, most discrete optimization problems that arise in machine learning have specific structure, which can be leveraged in order to develop tractable exact or approximate optimization procedures. For example, consider the case of a discrete graphical model over a set of random variables. For the task of prediction, a key structural object is the ``marginal polytope,'' a convex bounded set characterized by the underlying graph of the graphical model. Properties of this polytope, as well as its approximations, have been successfully used to develop efficient algorithms for inference. For the task of model selection, a key structural object is the discrete graph itself. Another problem structure is sparsity: while estimating a high-dimensional model for regression from a limited amount of data is typically an ill-posed problem, it becomes solvable if it is known that many of the coefficients are zero. Yet another problem structure, submodularity, a discrete analog of convexity, has been shown to arise in many machine learning problems, including structure learning of probabilistic models, variable selection, and clustering. One of the primary goals of this workshop is to investigate how to leverage such structures.

There are two major classes of approaches towards solving such discrete optimization problems in machine learning: combinatorial algorithms and continuous relaxations. In the first, the discrete optimization problems are solved directly in the discrete constraint space of the variables. Typically these take the form of search-based procedures, where the discrete structure is exploited to limit the search space. In the other, the discrete problems are transformed into continuous, often tractable convex problems by relaxing the integrality constraints. The exact fractional solutions are then ``rounded'' back to the discrete domain.
Another goal of this workshop is to bring researchers from these two communities together to discuss (a) the tradeoffs and respective benefits of the existing approaches, and (b) the problem structures suited to each approach. For instance, submodular problems can be solved tractably by combinatorial algorithms; similarly, in certain cases, continuous relaxations yield discrete solutions that are either exact or whose objective is within a multiplicative factor of the optimum. In addition to studying discrete structures and algorithms, the workshop will place particular emphasis on novel applications of discrete optimization in machine learning.
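On the combinatorial side, the standard greedy algorithm for maximizing a monotone submodular function under a cardinality constraint is a classic example of a tractable procedure with a multiplicative guarantee (it achieves a (1 - 1/e) approximation, by Nemhauser, Wolsey, and Fisher). The sketch below applies it to maximum coverage on a made-up toy instance; the function and set names are our own illustrative choices:

```python
def greedy_max_coverage(sets, k):
    """Greedily maximize the monotone submodular coverage function
    F(S) = |union of chosen sets| subject to choosing at most k sets.
    Returns the indices of the chosen sets and the covered elements.
    """
    covered, chosen = set(), []
    for _ in range(k):
        # Pick the set with the largest marginal gain over what is covered.
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        if not sets[best] - covered:
            break  # no set adds anything new
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

# Hypothetical ground set {1,...,7} split across four candidate sets.
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
chosen, covered = greedy_max_coverage(sets, k=2)
```

On this instance the greedy rule first picks the largest set and then the one with the biggest remaining gain, covering the whole ground set with two picks; in general the greedy value is guaranteed to be within a factor (1 - 1/e) of the optimum.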
Please see the workshop webpage (http://www.discml.cc) for the final schedule.

Workshop

New Directions in Multiple Kernel Learning

Marius Kloft · Ulrich Rueckert · Cheng Soon Ong · Alain Rakotomamonjy · Soeren Sonnenburg · Francis Bach
Dec 11, 7:30 AM - 6:30 PM Hilton: Mt Currie North

Research on Multiple Kernel Learning (MKL) has matured to the point where efficient systems can be applied out of the box to various application domains. In contrast to last year's workshop, which evaluated the achievements of MKL in the past decade, this workshop looks beyond the standard setting and investigates new directions for MKL.

In particular, we focus on two topics:
1. Three research areas are closely related but have traditionally been treated separately: learning the kernel, learning distance metrics, and learning the covariance function of a Gaussian process. We would therefore like to bring together researchers from these areas to find a unifying view, explore connections, and exchange ideas.
2. We ask for novel contributions that take new directions, propose innovative approaches, or take unconventional views. This includes research that goes beyond the classical sum-of-kernels setup, finds new ways of combining kernels, or applies MKL in more complex settings.

Taking advantage of the broad variety of research topics at NIPS, the workshop aims to foster collaboration across the borders of the traditional multiple kernel learning community.
