


Workshops
Alekh Agarwal · Sasha Rakhlin

[ Montebajo: Basketball Court ]

Since its early days, the field of Machine Learning has focused on developing computationally tractable algorithms with good learning guarantees. The vast literature on statistical learning theory has led to a good understanding of how the predictive performance of different algorithms improves as a function of the number of training samples. By the same token, the well-developed theories of optimization and sampling methods have yielded efficient computational techniques at the core of most modern learning methods. The separate developments in these fields mean that, given an algorithm, we have a sound understanding of its statistical and computational behavior. However, there has been little joint study of the computational and statistical complexities of learning, and as a consequence, little is known about the interaction and trade-offs between statistical accuracy and computational complexity. Indeed, a systematic joint treatment can answer some very interesting questions: what is the best attainable statistical error given a finite computational budget? What is the best learning method to use given different computational constraints and desired statistical yardsticks? Is it the case that simple methods outperform complex ones in computationally impoverished scenarios?
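One way to make such questions precise (an illustrative framing on our part, not text from this abstract) is the well-known decomposition of Bottou and Bousquet, which splits the excess risk of the solution returned under a computational budget into three terms:

    E[R(\tilde{f}_n)] - R(f^*)
      = \underbrace{R(f_{\mathcal{F}}) - R(f^*)}_{\text{approximation}}
      + \underbrace{E[R(f_n) - R(f_{\mathcal{F}})]}_{\text{estimation}}
      + \underbrace{E[R(\tilde{f}_n) - R(f_n)]}_{\text{optimization}}

where f^* is the Bayes predictor, f_F the best predictor in the model class F, f_n the empirical risk minimizer over n samples, and \tilde{f}_n the approximate solution the algorithm actually returns. A finite budget forces a trade-off among the three terms: more optimization accuracy on fewer samples, or less on more.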


At its core, the PAC framework aims to study learning through the lens
of computation. …

Greg Shakhnarovich · Dhruv Batra · Brian Kulis · Kilian Q Weinberger

[ Melia Sierra Nevada: Guejar ]

The notion of similarity (or distance) is central to many problems in machine learning: information retrieval, nearest-neighbor based prediction, visualization of high-dimensional data, etc. Historically, similarity was estimated via a fixed distance function (typically Euclidean), sometimes engineered by hand using domain knowledge. Using statistical learning methods instead to learn similarity functions is appealing, and over the last decade this problem has attracted much attention in the community, with several publications in NIPS, ICML, AISTATS, CVPR, etc.

Much of this work, however, has focused on a specific, restricted approach: learning a Mahalanobis distance, under a variety of objectives and constraints. This effectively limits the setup to learning a linear embedding of the data.
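To see the restriction concretely: any Mahalanobis metric with M = L^T L is just Euclidean distance after the linear map x -> Lx. A minimal numpy sketch (our illustration, not code from the workshop):

    import numpy as np

    rng = np.random.default_rng(0)
    d = 5
    # A random symmetric positive-definite matrix M = L^T L
    A = rng.normal(size=(d, d))
    M = A.T @ A + np.eye(d)          # ensure positive definiteness
    L = np.linalg.cholesky(M).T      # with this convention, M = L^T L

    x, y = rng.normal(size=d), rng.normal(size=d)

    # Mahalanobis distance under M ...
    d_M = np.sqrt((x - y) @ M @ (x - y))
    # ... equals Euclidean distance after the linear embedding x -> Lx
    d_E = np.linalg.norm(L @ x - L @ y)
    print(np.isclose(d_M, d_E))      # True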

In this workshop, we will look beyond this setup and consider methods that learn non-linear embeddings of the data, either explicitly via non-linear mappings or implicitly via kernels. We will especially encourage discussion of methods that are suitable for the large-scale problems increasingly facing practitioners of learning methods: large numbers of examples, high dimensionality of the original space, and/or massively multi-class problems (e.g., classification with 10,000+ categories, as in the 10,000,000-image ImageNet dataset).

Our goals are to


1. Create a comprehensive understanding of the state-of-the-art in similarity learning, via presentation …

Robert Williamson · John Langford · Ulrike von Luxburg · Mark Reid · Jennifer Wortman Vaughan

[ Melia Sierra Nevada: Dilar ]

What:
The workshop proposes to focus on relations between machine learning problems. We use “relation” quite generally to include (but not limit ourselves to) notions such as: one type of problem being viewed as a special case of another type (e.g., classification as thresholded probability estimation); reductions between learning problems (e.g., transforming ranking problems into classification problems); and the use of surrogate losses (e.g., replacing misclassification loss with some other, convex loss). We also include relations between sets of learning problems, such as those studied in the (old) theory of “comparison of experiments”, as well as recent connections between machine learning problems and what could be construed as "economic learning problems" such as prediction markets and forecast elicitation.
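As a small illustration of the surrogate-loss relation mentioned above (our sketch, using the usual margin convention m = y f(x); numpy assumed), the convex hinge loss upper-bounds the misclassification loss pointwise:

    import numpy as np

    # Margin m = y * f(x): positive means a correct classification.
    m = np.linspace(-2.0, 2.0, 9)
    zero_one = (m <= 0).astype(float)    # misclassification (0-1) loss
    hinge = np.maximum(0.0, 1.0 - m)     # convex surrogate used by SVMs
    assert np.all(hinge >= zero_one)     # surrogate bounds 0-1 loss everywhere

Minimizing the tractable surrogate then controls the intractable loss one actually cares about, which is the prototypical "relation" between two learning problems.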


Why: The point of studying relations between machine learning problems is that doing so offers a realistic path to understanding the field of machine learning as a whole. It could serve to prevent re-invention and rapidly facilitate the growth of new methods. The motivation is not dissimilar to Hal Varian’s notion of combinatorial innovation. Another analogy is to consider the development of function theory in the 19th century and observe the rapid advances made possible by the development …

Tatiana V. Guy · Miroslav Karny · David H Wolpert · Alessandro VILLA · David Rios Insua

[ Melia Sol y Nieve: Snow ]


OVERVIEW
The prescriptive Bayesian theory of dynamic decision making under uncertainty and incomplete knowledge has reached a high level of maturity. It is well supported by efficient and theoretically justified algorithms respecting the different physical constraints present in applications. The research repeatedly stresses that imperfectness, i.e. the limited cognitive and evaluative resources of decision makers, should be taken into account. Decision making with imperfect decision makers, however, lacks a firm prescriptive ground. This problem emerges repeatedly and has proved difficult to solve. For instance, i) the consistent theory of incomplete Bayesian games cannot be applied by imperfect participants; ii) the desirable incorporation of “deliberation effort” into the design of decision-making strategies remains an open problem. At the same time, real societal, biological, economic and engineered systems practically cope with imperfectness, and many descriptive studies confirm their efficiency. The need to understand and remove this discrepancy motivated the preceding NIPS 2010 workshop Decision Making …

Karsten Borgwardt · Oliver Stegle · Shipeng Yu · Glenn Fung · Faisal Farooq · Balaji R Krishnapuram

[ Melia Sol y Nieve: Slalom ]

Background

Technological advances to profile medical patients have led to a change of paradigm in medical prognoses. Medical diagnostics carried out by medical experts is increasingly complemented by large-scale data collection and quantitative genome-scale molecular measurements. Data that are already available as of today or are to enter medical practice in the near future include personal medical records, genotype information, diagnostic tests, proteomics and other emerging ‘omics’ data types.

This rich source of information forms the basis of future medicine, and of personalized medicine in particular. Predictive methods for personalized medicine make it possible to integrate the data specific to each patient (genetics, exams, demographics, imaging, lab tests, genomics, etc.), both to improve prognosis and to design an individually optimal therapy.

However, the statistical and computational approaches behind these analyses face a number of major challenges. For example, it is necessary to identify and correct for structured influences within the data, to deal with missing data, and to address the statistical challenges that come with carrying out millions of statistical tests. Also, to render these methods useful in practice, computational efficiency and scalability to large-scale datasets are an integral requirement. Finally, any computational approach needs to be tightly integrated with medical practice to be …
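For the multiple-testing challenge mentioned above, one standard remedy is Benjamini-Hochberg false discovery rate control; a minimal sketch (our illustration, with made-up p-values; numpy assumed):

    import numpy as np

    def benjamini_hochberg(pvals, q=0.05):
        """Return a boolean mask of rejected hypotheses at FDR level q."""
        p = np.asarray(pvals)
        m = p.size
        order = np.argsort(p)
        ranked = p[order]
        # Largest rank k (1-indexed) with p_(k) <= (k/m) * q
        thresh = (np.arange(1, m + 1) / m) * q
        below = ranked <= thresh
        reject = np.zeros(m, dtype=bool)
        if below.any():
            k = np.max(np.nonzero(below)[0])   # last passing position
            reject[order[: k + 1]] = True
        return reject

    # Ten tests shown; a million genome-wide tests work the same way.
    pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.9]
    print(benjamini_hochberg(pvals, q=0.05))   # rejects the first two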

Nando de Freitas · Roman Garnett · Frank R Hutter · Michael A Osborne

[ Melia Sierra Nevada: Hotel Bar ]

Recently, we have witnessed many important advances in learning approaches for sequential decision making. These advances have occurred in different communities, which refer to the problem using different terminology: Bayesian optimization, experimental design, bandits (x-armed bandits, contextual bandits, Gaussian process bandits), active sensing, personalized recommender systems, automatic algorithm configuration, reinforcement learning and so on. These communities also tend to use different methodologies: some focus more on practical performance, while others are more concerned with theoretical aspects of the problem. As a result, they have derived and engineered a diverse range of methods for trading off exploration and exploitation in learning. For these reasons, it is timely and important to bring these communities together to identify differences and commonalities, to propose common benchmarks, to review the many practical applications (interactive user interfaces, automatic tuning of parameters and architectures, robotics, recommender systems, active vision, and more), to narrow the gap between theory and practice, and to identify strategies for attacking high dimensionality.
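As a minimal example of an exploration-exploitation trade-off (our sketch of the classical UCB1 rule of Auer et al., 2002, on Bernoulli arms; not material from the workshop):

    import numpy as np

    rng = np.random.default_rng(0)
    true_means = np.array([0.2, 0.5, 0.7])    # unknown to the learner
    K, T = len(true_means), 5000
    counts, sums = np.zeros(K), np.zeros(K)

    for t in range(T):
        if t < K:
            arm = t                            # play each arm once
        else:
            means = sums / counts
            bonus = np.sqrt(2 * np.log(t + 1) / counts)
            arm = int(np.argmax(means + bonus))  # optimism under uncertainty
        reward = rng.random() < true_means[arm]
        counts[arm] += 1
        sums[arm] += reward

    print("pulls per arm:", counts)            # concentrates on the best arm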
Michael Hirsch · Sarah Bridle · Bernhard Schölkopf · Phil Marshall · Stefan Harmeling · Mark Girolami

[ Melia Sierra Nevada: Monachil ]

Cosmology aims at understanding the universe and its evolution through scientific observation and experiment, and hence addresses one of the most profound questions of humankind. With the establishment of robotic telescopes and wide sky surveys, cosmology already faces the challenge of evaluating vast amounts of data. Multiple projects will image large fractions of the sky in the next decade; for example, the Dark Energy Survey will culminate in a catalogue of 300 million objects extracted from petabytes of observational data. The importance of automatic data evaluation and analysis tools for the success of these surveys is undisputed.

Many problems in modern cosmological data analysis are tightly related to fundamental problems in machine learning, such as classifying stars and galaxies, and cluster finding of dense galaxy populations. Other typical problems include data reduction, probability density estimation, how to deal with missing data and how to combine data from different surveys.

An increasing part of modern cosmology aims at developing new statistical data analysis tools and at studying their behaviour and systematics, often without awareness of recent developments in machine learning and computational statistics.

Therefore, the objectives of this workshop are two-fold:

(i) The workshop …

Gal Elidan · Zoubin Ghahramani · John Lafferty

[ Melia Sierra Nevada: Genil ]

From high-throughput biology and astronomy to voice analysis and medical diagnosis, a wide variety of complex domains are inherently continuous and high dimensional. The statistical framework of copulas offers a flexible tool for modeling highly non-linear multivariate distributions for continuous data. Copulas are a theoretically and practically important tool from statistics that explicitly allow one to separate the dependency structure between random variables from their marginal distributions. Although bivariate copulas are a widely used tool in finance, and have even been famously accused of "bringing the world financial system to its knees" (Wired Magazine, Feb. 23, 2009), the use of copulas for high dimensional data is in its infancy.
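The separation the abstract describes is easy to demonstrate: sample dependence from a Gaussian copula, then impose arbitrary marginals. A hedged sketch assuming numpy and scipy (our illustration):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, rho = 10000, 0.8

    # 1) Dependence: correlated Gaussians pushed through their own CDF
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u = stats.norm.cdf(z)                  # uniform marginals, Gaussian copula

    # 2) Marginals: apply any inverse CDFs, independently of step (1)
    x = stats.expon.ppf(u[:, 0])           # exponential marginal
    y = stats.t.ppf(u[:, 1], df=3)         # heavy-tailed Student-t marginal

    # The dependence survives the change of marginals:
    print(stats.spearmanr(x, y)[0])        # rank correlation stays high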

While studied in statistics for many years, copulas have only recently been noticed by a number of machine learning researchers, with this "new" tool appearing in the recent leading machine learning conferences (ICML, UAI and NIPS). The goal of this workshop is to promote the further understanding and development of copulas for the kinds of complex modeling tasks that are the focus of machine learning. Specifically, the goals of the workshop are to:

* draw the attention of machine learning researchers to the
important framework of copulas

* provide a theoretical …

Melissa K Carroll · Guillermo Cecchi · Kai-min K Chang · Moritz Grosse-Wentrup · James Haxby · Georg Langs · Anna Korhonen · Bjoern Menze · Brian Murphy · Janaina Mourao-Miranda · Vittorio Murino · Francisco Pereira · Irina Rish · Mert Sabuncu · Irina Simanova · Bertrand Thirion

[ Melia Sol y Nieve: Aqua ]

https://sites.google.com/site/mlini2011/

SUBMISSION DEADLINE: October 17, 2011

Primary contacts:

* Moritz Grosse-Wentrup moritzgw@ieee.org
* Georg Langs langs@csail.mit.edu
* Brian Murphy brian.murphy@unitn.it
* Irina Rish rish@us.ibm.com


MOTIVATION:

Modern multivariate statistical methods have been increasingly applied to various problems in neuroimaging, including “mind reading”, “brain mapping”, clinical diagnosis and prognosis. Multivariate pattern analysis (MVPA) is a promising machine-learning approach for discovering complex relationships between high-dimensional signals (e.g., brain images) and variables of interest (e.g., external stimuli and/or the brain's cognitive states). Modern multivariate regularization approaches can overcome the curse of dimensionality and produce highly predictive models even in the high-dimensional, small-sample scenarios typical in neuroimaging (e.g., tens to hundreds of thousands of voxels and just a few hundred samples).
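A toy sketch of why regularization helps in this small-n, large-p regime (our illustration with synthetic data; scikit-learn assumed):

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n, p = 200, 20000                        # few samples, many "voxels"
    X = rng.normal(size=(n, p))
    w = np.zeros(p); w[:10] = 1.0            # only a few informative features
    y = X @ w + 0.5 * rng.normal(size=n)

    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    for alpha in [1e-6, 1.0, 100.0]:
        model = Ridge(alpha=alpha).fit(Xtr, ytr)
        print(alpha, round(model.score(Xte, yte), 3))  # R^2 peaks with shrinkage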

However, despite the rapidly growing number of neuroimaging applications in machine learning, its impact on how theories of brain function are construed has received little consideration. Accordingly, machine-learning techniques are frequently met with skepticism in the domain of cognitive neuroscience. In this workshop, we intend to investigate the implications that follow from adopting machine-learning methods for studying brain function. In particular, this concerns the question of how these methods may be used to represent cognitive states, and what ramifications this has for consequent theories of cognition. Besides …

Joseph E Gonzalez · Sameer Singh · Graham Taylor · James Bergstra · Alice Zheng · Misha Bilenko · Yucheng Low · Yoshua Bengio · Michael Franklin · Carlos Guestrin · Andrew McCallum · Alexander Smola · Michael Jordan · Sugato Basu

[ Montebajo: Theater ]

Driven by cheap commodity storage, fast data networks, rich structured models, and the increasing desire to catalog and share our collective experiences in real-time, the scale of many important learning problems has grown well beyond the capacity of traditional sequential systems. These “Big Learning” problems arise in many domains including bioinformatics, astronomy, recommendation systems, social networks, computer vision, web search and online advertising. Simultaneously, parallelism has emerged as a dominant, widely used computational paradigm in devices ranging from energy-efficient mobile processors, to desktop supercomputers in the form of GPUs, to massively scalable cloud computing services. The Big Learning setting has attracted intense interest across industry and academia, with active research spanning diverse fields ranging from machine learning and databases to large-scale distributed systems and programming languages. However, because the Big Learning setting is being studied by experts from these various communities, there is a need for a common venue to discuss recent progress, to identify pressing new challenges, and to exchange new ideas.


This workshop aims to:

* Bring together parallel and distributed system builders in industry and academia, machine learning experts, and end users to identify the key challenges, opportunities, and myths of Big Learning. What REALLY …

Yoshua Bengio · Adam Coates · Yann LeCun · Nicolas Le Roux · Andrew Y Ng

[ Telecabina: Movie Theater ]

In recent years, there has been a lot of interest in algorithms that learn feature hierarchies from unlabeled data. Deep learning methods such as deep belief networks, sparse coding-based methods, convolutional networks, and deep Boltzmann machines, have shown promise and have already been successfully applied to a variety of tasks in computer vision, audio processing, natural language processing, information retrieval, and robotics. In this workshop, we will bring together researchers who are interested in deep learning and unsupervised feature learning, review the recent technical progress, discuss the challenges, and identify promising future research directions.
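To fix ideas about unsupervised feature learning, here is a minimal single-hidden-layer autoencoder in plain numpy (our sketch; the deep architectures discussed at the workshop are far more elaborate):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 20))           # toy unlabeled data
    d, h, lr = X.shape[1], 5, 0.01

    W1 = 0.1 * rng.normal(size=(d, h)); b1 = np.zeros(h)
    W2 = 0.1 * rng.normal(size=(h, d)); b2 = np.zeros(d)

    for step in range(2000):
        H = np.tanh(X @ W1 + b1)             # encoder: learned features
        Xhat = H @ W2 + b2                   # linear decoder
        err = Xhat - X                       # reconstruction error
        # Backpropagation through the two layers
        gW2 = H.T @ err / len(X); gb2 = err.mean(0)
        dZ = (err @ W2.T) * (1 - H ** 2)     # tanh derivative
        gW1 = X.T @ dZ / len(X); gb1 = dZ.mean(0)
        for param, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
            param -= lr * g

    print(np.mean(err ** 2))                 # reconstruction MSE has decreased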

Through invited talks, panels and discussions (see program schedule), we will attempt to address some of the more controversial topics in deep learning today, such as whether hierarchical systems are more powerful, the issues of scalability of deep learning, and what principles should guide the design of objective functions used to train these models.

The workshop will also invite paper submissions on the development of unsupervised feature learning and deep learning algorithms, theoretical foundations, inference and optimization, semi-supervised and transfer learning, and applications of deep learning and unsupervised feature learning to real-world tasks. Papers will be presented as oral or poster presentations (with a short spotlight …

Suvrit Sra · Stephen Wright · Sebastian Nowozin

[ Melia Sierra Nevada: Dauro ]

OPT2011: Optimization for Machine Learning

This workshop builds on the precedent established by our previous, very well-received NIPS workshops, OPT2008-OPT2010.

The OPT workshops enjoyed packed (at times overpacked) attendance, and this enthusiastic reception underscores the strong interest, relevance, and importance of optimization in the ML community.

This continued interest in optimization is readily understood, because optimization lies at the heart of ML algorithms. Sometimes, classical textbook algorithms suffice, but the majority of problems require tailored methods based on a deeper understanding of the ML requirements. In fact, ML applications and researchers are driving some of the most cutting-edge developments in optimization today. The intimate relation of optimization with ML is the key motivation for our workshop, which aims to foster discussion, discovery, and dissemination of the state-of-the-art in optimization.

FURTHER DETAILS
--------------------------------
Optimization is indispensable to many machine learning algorithms. What can we say beyond this obvious realization?

Previous talks at the OPT workshops have covered frameworks for convex programs (D. Bertsekas), the intersection of ML and optimization, especially in the area of SVM training (S. Wright), large-scale learning via stochastic gradient methods and …
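For readers new to the area, the flavor of stochastic gradient methods for large-scale least squares can be conveyed in a few lines (our sketch; the step size and problem are illustrative only, not a recommendation from the organizers):

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 10000, 20
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = X @ w_true + 0.1 * rng.normal(size=n)

    w = np.zeros(d)
    eta = 0.01                                  # small constant step size
    for _ in range(50000):
        i = rng.integers(n)                     # one random sample per step
        w -= eta * (X[i] @ w - y[i]) * X[i]     # gradient of 0.5*(x_i.w - y_i)^2
    print(np.linalg.norm(w - w_true))           # small: w is close to w_true

Each step costs O(d) regardless of n, which is precisely why these methods dominate at scale.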

Yevgeny Seldin · Yacov Crammer · Nicolò Cesa-Bianchi · Francois Laviolette · John Shawe-Taylor

[ Melia Sol y Nieve: Ski ]

Model order selection, the trade-off between model complexity and its empirical data fit, is one of the fundamental questions in machine learning. It has been studied in detail in the context of supervised learning with i.i.d. samples, but has received relatively little attention beyond this domain. The goal of our workshop is to draw attention to the question of model order selection in other domains, to share ideas and approaches between the domains, and to identify promising directions for future research. Our interests cover ways of defining model complexity in different domains, examples of practical problems where intelligent model order selection yields an advantage over simplistic approaches, and new theoretical tools for the analysis of model order selection. The domains of interest span all problems that cannot be directly mapped to supervised learning with i.i.d. samples, including, but not limited to, reinforcement learning, active learning, learning with delayed, partial, or indirect feedback, and learning with submodular functions.
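A textbook instance of the trade-off in the supervised i.i.d. setting (our toy sketch, numpy assumed): choosing a polynomial degree by held-out error rather than training fit:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 60)
    y = np.sin(3 * x) + 0.2 * rng.normal(size=60)
    xtr, ytr, xval, yval = x[:40], y[:40], x[40:], y[40:]

    # Trade off model order (polynomial degree) against empirical fit
    for k in [1, 3, 5, 9]:
        coeffs = np.polyfit(xtr, ytr, deg=k)
        val_mse = np.mean((np.polyval(coeffs, xval) - yval) ** 2)
        print(k, round(val_mse, 4))    # too-low and too-high k both do worse

The workshop's point is that outside this i.i.d. setting, neither the complexity measure nor the validation procedure is obvious.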

First steps toward defining model complexity in reinforcement learning, applying the trade-off between model complexity and empirical performance, and analyzing it can be found in [1-4]. An intriguing research direction coming out of these works is the simultaneous analysis of exploration-exploitation and model order …

Ameet S Talwalkar · Lester W Mackey · Mehryar Mohri · Michael W Mahoney · Francis Bach · Mike Davies · Remi Gribonval · Guillaume R Obozinski

[ Montebajo: Room 1 ]

Sparse representation and low-rank approximation are fundamental tools in fields as diverse as computer vision, computational biology, signal processing, natural language processing, and machine learning. Recent advances in sparse and low-rank modeling have led to increasingly concise descriptions of high dimensional data, together with algorithms of provable performance and bounded complexity. Our workshop aims to survey recent work on sparsity and low-rank approximation and to provide a forum for open discussion of the key questions concerning these dimensionality reduction techniques. The workshop will be divided into two segments, a "sparsity segment" emphasizing sparse dictionary learning and a "low-rank segment" emphasizing scalability and large data.

The sparsity segment will be dedicated to learning sparse latent representations and dictionaries: decomposing a signal or a vector of observations as sparse linear combinations of basis vectors, atoms or covariates is ubiquitous in machine learning and signal processing. Algorithms and theoretical analyses for obtaining these decompositions are now numerous. Learning the atoms or basis vectors directly from data has proven useful in several domains and is often seen from different viewpoints: (a) as a matrix factorization problem with potentially some constraints such as pointwise nonnegativity, (b) as a latent variable model which can be …
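In symbols, the dictionary learning problem described above is commonly written as the (non-convex) matrix factorization

    \min_{D, A} \; \tfrac{1}{2}\|X - DA\|_F^2 + \lambda \sum_{i=1}^{n} \|\alpha_i\|_1
    \quad \text{s.t.} \quad \|d_j\|_2 \le 1, \; j = 1, \dots, k

where the columns of X are the observed signals, the columns d_j of D are the learned atoms, and the columns \alpha_i of A are the sparse codes. This is one common formulation among several, not the only one discussed at the workshop.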

Raymond Mooney · Trevor Darrell · Kate Saenko

[ Montebajo: Library ]

A growing number of researchers in computer vision have started to explore how language accompanying images and video can be used to aid interpretation and retrieval, as well as train object and activity recognizers. Simultaneously, an increasing number of computational linguists have begun to investigate how visual information can be used to aid language learning and interpretation, and to ground the meaning of words and sentences in perception. However, there has been very little direct interaction between researchers in these two distinct disciplines. Consequently, researchers in each area have a quite limited understanding of the methods in the other area, and do not optimally exploit the latest ideas and techniques from both disciplines when developing systems that integrate language and vision. Therefore, we believe the time is particularly opportune for a workshop that brings together researchers in both computer vision and natural-language processing (NLP) to discuss issues and ideas in developing systems that combine language and vision.

Traditional machine learning for both computer vision and NLP requires manually annotating images, video, text, or speech with detailed labels, parse-trees, segmentations, etc. Methods that integrate language and vision hold the promise of greatly reducing such manual supervision by using naturally co-occurring text …

Thomas Dietterich · J. Zico Kolter · Matthew A Brown

[ Melia Sierra Nevada: Guejar ]

Sustainability problems pose one of the greatest challenges facing society. Humans consume more than 16TW of power, about 84% of which comes from unsustainable fossil fuels. In addition to simply being a finite resource, the carbon released from fossil fuels is a significant driver of climate change and could have a profound impact on our environment. In addition to carbon releases, humans are modifying the ecosphere in many ways that are leading to large changes in the function and structure of ecosystems. These include huge releases of nitrogen from fertilizers, the collapse and extinction of many species, and the unsustainable harvest of natural resources (e.g., fish, timber). While sustainability problems span many disciplines, several tasks in this space are fundamentally prediction, modeling, and control tasks, areas where machine learning can have a large impact. Many of these problems also require the development of novel machine learning methods, particularly methods that can scale to very large spatio-temporal problem instances.

In recent years there has been growing interest in applying machine learning to problems of sustainability, spanning applications in energy, environmental management, and climate modeling. The goal of this workshop will be to bring together researchers from both the machine learning and sustainability …

Marcello Pelillo · Joachim M Buhmann · Tiberio Caetano · Bernhard Schölkopf · Larry Wasserman

[ Melia Sierra Nevada: Hotel Bar ]

The fields of machine learning and pattern recognition can arguably be considered as a modern-day incarnation of an endeavor which has challenged mankind since antiquity. In fact, fundamental questions pertaining to categorization, abstraction, generalization, induction, etc., have been on the agenda of mainstream philosophy, under different names and guises, since its inception. With the advent of modern digital computers and the availability of enormous amounts of raw data, these questions have now taken on a computational flavor: instead of asking, say, "What is a dog?", we have started asking "How can one recognize a dog?" or, more technically, "What is an algorithm to recognize a dog?". Indeed, it has even been maintained that for a philosophical theory of knowledge to be respectable, it has to be described in computational terms (Thagard, 1988).

As it often happens with scientific research, in the early days of machine learning and pattern recognition there used to be a genuine interest around philosophical and conceptual issues (see, e.g., Minsky, 1961; Sutherland, 1968; Watanabe, 1969; Bongard, 1970; Nelson, 1976; Good, 1983), but over time the interest shifted almost entirely to technical and algorithmic aspects, and became driven mainly by practical applications. With this reality in mind, it …

Winter Mason · Jennifer Wortman Vaughan · Hanna Wallach

[ Telecabina: Movie Theater ]

Computational social science is an emerging academic research area at the intersection of computer science, statistics, and the social sciences, in which quantitative methods and computational tools are used to identify and answer social science questions. The field is driven by new sources of data from the Internet, sensor networks, government databases, crowdsourcing systems, and more, as well as by recent advances in computational modeling, machine learning, statistics, and social network analysis.

The related area of social computing deals with the mechanisms through which people interact with computational systems, examining how and why people contribute to crowdsourcing sites, and the Internet more generally. Examples of social computing systems include prediction markets, reputation systems, and collaborative filtering systems, all designed with the intent of capturing the wisdom of crowds.

Machine learning plays an important role in both of these research areas, but to make truly groundbreaking advances, collaboration is necessary: social scientists and economists are uniquely positioned to identify the most pertinent and vital questions and problems, as well as to provide insight into data generation, while computer scientists contribute significant expertise in developing novel, quantitative methods and tools.

The inaugural workshop brought together experts from fields as diverse as political …

John Blitzer · Corinna Cortes · Afshin Rostamizadeh

[ Melia Sierra Nevada: Monachil ]

A common assumption in theoretical models of learning such as the standard PAC model [20], as well as in the design of learning algorithms, is that training instances are drawn according to the same distribution as the unseen test examples. In practice, however, there are many cases where this assumption does not hold. There can be no hope for generalization, of course, when the training and test distributions vastly differ, but when they are less dissimilar, learning can be more successful. The main theme of this workshop is the theoretical, algorithmic, and empirical analysis of such cases, where there is a mismatch between the training and test distributions. This includes the crucial scenario of domain adaptation, where the training examples are drawn from a source domain distinct from the target domain from which the test examples are extracted, and the more general scenario of multiple-source adaptation, where training instances may have been collected from multiple source domains, all distinct from the target [13]. The topic of our workshop also covers other important problems, such as sample bias correction, and has tight connections with problems such as active learning, where the active distribution corresponding to the learner's labeling …
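A classic toy illustration of correcting for such a mismatch is importance weighting in the spirit of Shimodaira (2000): fit a misspecified model with weights w(x) = p_target(x)/p_source(x). A hedged sketch (our code; here the density ratio is known by construction, whereas in practice it must be estimated):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    f = lambda x: -x + x ** 3                 # true curve (nonlinear)

    xs = rng.normal(0.5, 0.5, 300)            # source inputs
    ys = f(xs) + 0.1 * rng.normal(size=300)
    xt = rng.normal(0.0, 0.3, 300)            # target inputs

    # Importance weights w(x) = p_target(x) / p_source(x)
    w = norm.pdf(xs, 0.0, 0.3) / norm.pdf(xs, 0.5, 0.5)

    for weights in (np.ones_like(xs), w):
        a, b = np.polyfit(xs, ys, deg=1, w=np.sqrt(weights))
        pred = a * xt + b
        print(np.mean((pred - f(xt)) ** 2))   # weighted fit is better on target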

Quoc V. Le · Marc'Aurelio Ranzato · Russ Salakhutdinov · Josh Tenenbaum · Andrew Y Ng

[ Montebajo: Library ]

The ability to learn abstract representations that support transfer to novel but related tasks lies at the core of many AI-related tasks, including visual object recognition, information retrieval, speech perception, and language understanding. Hierarchical models that support inferences at multiple levels have been developed and argued to be among the most promising candidates for achieving this goal. An important property of these models is that they can extract complex statistical dependencies from high-dimensional sensory input and efficiently learn latent variables by re-using and combining intermediate concepts, allowing these models to generalize well across a wide variety of tasks.

In the past few years, researchers across many different communities, from applied statistics to engineering, computer science and neuroscience, have proposed several hierarchical models that are capable of extracting useful, high-level structured representations. The learned representations have been shown to give promising results for solving a multitude of novel learning tasks. A few notable examples of such models include Deep Belief Networks, Deep Boltzmann Machines, sparse coding-based methods, nonparametric and parametric hierarchical Bayesian models.

Despite recent successes, many existing hierarchical models are still far from being able to represent, identify and learn the wide variety of possible patterns and structure in …

Rafael Ramirez · Darrell Conklin · Douglas Eck · Rif A. Saurous

[ Melia Sierra Nevada: Dilar ]

Motivation
With the current explosion and rapid expansion of music in digital formats, and the computational power of modern systems, research on machine learning and music is gaining increasing popularity. As the complexity of the problems investigated by researchers in machine learning and music increases, there is a need to develop new algorithms and methods to solve them. The focus of this workshop is on novel methods which take into account or benefit from musical structure. MML 2011 aims to build on the previous three successful MML editions, MML’08, MML’09 and MML’10.

Topic
It has been convincingly shown that many useful applications can be built using features derived from short musical snippets (chroma, MFCCs and related timbral features, augmented with tempo and beat representations). Given the great advances in these applications, higher level aspects of musical structure such as melody, harmony, phrasing and rhythm can now be given further attention, and we especially welcome contributions exploring these areas. The MML 2011 workshop intends to concentrate on machine learning algorithms employing higher level features and representations for content-based music processing.
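For readers unfamiliar with these features, they are a few library calls away; a hedged sketch assuming the librosa package and a placeholder audio file:

    import librosa

    # Path is a placeholder; any mono audio file works
    y, sr = librosa.load("example.wav", sr=22050)

    # The short-snippet timbral and harmonic features mentioned above
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)    # timbre
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)      # pitch-class energy
    tempo, beats = librosa.beat.beat_track(y=y, sr=sr)    # tempo and beats

    print(mfcc.shape, chroma.shape, tempo)

The workshop's interest is in what comes after such feature extraction: models of melody, harmony, phrasing and rhythm.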


Papers in all applications on music and machine learning are welcome, including but not limited to automatic classification of music (audio …

Jean-Marc Andreoli · Cedric Archambeau · Guillaume Bouchard · Shengbo Guo · Kristian Kersting · Scott Sanner · Martin Szummer · Paolo Viappiani · Onno Zoeter

[ Montebajo: Room 1 ]

Preference learning has been studied for several decades and has drawn increasing attention in recent years due to its importance in diverse applications such as web search, ad serving, information retrieval, recommender systems, electronic commerce, and many others. In all of these applications, we observe (often discrete) choices that reflect preferences among several entities, such as documents, webpages, products, or songs. Since the observations are partial, or censored, the goal is to learn the complete preference model, e.g., to reconstruct a general ordering function from observed pairwise preferences.
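A standard model for exactly this pairwise setting (our example, not necessarily the workshop's focus) is Bradley-Terry: each entity i carries a latent score s_i, and

    P(i \succ j) = \frac{e^{s_i}}{e^{s_i} + e^{s_j}} = \sigma(s_i - s_j),
    \qquad \sigma(t) = \frac{1}{1 + e^{-t}}

so maximum-likelihood fitting of the scores reduces to logistic regression on score differences, and sorting entities by s_i recovers a complete ordering from partial pairwise observations.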

Traditionally, preference learning has been studied independently in several research areas, such as machine learning, data and web mining, artificial intelligence, recommendation systems, and psychology among others, with a high diversity of application domains such as social networks, information retrieval, web search, medicine, biology, etc. However, contributions developed in one application domain can, and should, impact other domains. One goal of this workshop is to foster this type of interdisciplinary exchange, by encouraging abstraction of the underlying problem (and solution) characteristics during presentation and discussion. In particular, the workshop is motivated by the two following lines of research:

1. Large scale preference learning with sparse data: There has been a …

Jean-Philippe Vert · Gunnar Rätsch · Yanjun Qi · Tomer Hertz · Anna Goldenberg · Christina Leslie

[ Melia Sierra Nevada: Genil ]

The field of computational biology has seen dramatic growth over the past few years, in terms of newly available data, new scientific questions, and new challenges for learning and inference. In particular, biological data are often relationally structured and highly diverse, and thus well suited to approaches that combine multiple kinds of weak evidence from heterogeneous sources. These data may include sequenced genomes of a variety of organisms, gene expression data from multiple technologies, protein expression data, protein sequence and 3D structural data, protein interactions, gene ontology and pathway databases, genetic variation data (such as SNPs), and an enormous amount of textual data in the biological and medical literature. New types of scientific and clinical problems require the development of novel supervised and unsupervised learning methods that can use these growing resources. Furthermore, next generation sequencing technologies are yielding terabyte scale data sets that require novel algorithmic solutions.

The goal of this workshop is to present emerging problems and machine learning techniques in computational biology. We invited several speakers from the biology/bioinformatics community who will present current research problems in bioinformatics, and we will invite contributed talks on novel learning approaches in computational biology. We encourage contributions describing either progress on new bioinformatics problems …

Emily Fox · Ryan Adams

[ Melia Sierra Nevada: Dauro ]

Assessing the State of Bayesian Nonparametric Machine Learning

Bayesian nonparametric methods are an expanding part of the machine learning landscape. Proponents of Bayesian nonparametrics claim that these methods enable one to construct models that can scale their complexity with the data, while representing uncertainty in both the parameters and the structure. Detractors point out that the characteristics of the models are often not well understood and that inference can be unwieldy. Relative to the statistics community, machine learning practitioners of Bayesian nonparametrics frequently do not leverage the representation of uncertainty that is inherent in the Bayesian framework, nor do they perform the kind of analysis, both empirical and theoretical, that would set skeptics at ease. In this workshop we hope to bring a wide group together to constructively discuss and address these goals and shortcomings.
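As one concrete example of the model class under discussion, the Dirichlet process can be simulated by stick-breaking in a few lines (our sketch, truncated at K components; numpy assumed):

    import numpy as np

    rng = np.random.default_rng(0)
    alpha, K = 2.0, 50                    # concentration; truncation level

    # Stick-breaking: v_k ~ Beta(1, alpha), pi_k = v_k * prod_{j<k} (1 - v_j)
    v = rng.beta(1.0, alpha, size=K)
    pi = v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))

    print(pi[:5], pi.sum())               # weights decay; sum approaches 1

The number of components with non-negligible weight grows with the data rather than being fixed in advance, which is the "complexity scales with data" claim in miniature.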

Please see the following website for further information:
http://people.seas.harvard.edu/~rpa/nips2011npbayes.html

Antoine Bordes · Jason E Weston · Ronan Collobert · Leon Bottou

[ Melia Sol y Nieve: Ski ]

A key ambition of AI is to render computers able to operate in and interact with the real world. This can be made possible only if the machine is able to produce a correct interpretation of its available modalities (image, audio, text, etc.), upon which it can then build reasoning to take appropriate actions. Computational linguists use the term "semantics" to refer to the possible interpretations (concepts) of natural language expressions, and have shown some interest in "learning semantics", that is, finding these interpretations in an automated way. However, "semantics" is not restricted to the natural language modality; it is also pertinent to the speech and vision modalities. Hence, knowing visual concepts and common relationships between them would certainly bring a leap forward in scene analysis and image parsing, akin to the improvement that language phrase interpretations would bring to data mining, information extraction or automatic translation, to name a few.

Progress in learning semantics has been slow, mainly because it involves sophisticated models which are hard to train, especially since they seem to require large quantities of precisely annotated training data. However, recent advances in learning with weak and limited supervision have led to the emergence of a new body of …

Michael Hirsch · Stefan Harmeling · Rob Fergus · Peyman Milanfar

[ Melia Sol y Nieve: Snow ]

In recent years, computational photography (CP) has emerged as a new field that has put forward a new way of understanding and thinking about how we image and display our environment. Besides addressing classical imaging problems such as deblurring or denoising by exploiting new insights and methodology in machine learning as well as computer and human vision, CP goes well beyond traditional image processing and photography.
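For concreteness, the deblurring problem mentioned above has a classical linear baseline, Wiener deconvolution, which learning-based methods aim to improve upon; a minimal numpy sketch (our illustration, assuming a known kernel and circular boundary conditions):

    import numpy as np

    def wiener_deblur(blurred, kernel, k=0.01):
        """Naive Wiener deconvolution; k is a noise-to-signal constant."""
        H = np.fft.fft2(kernel, s=blurred.shape)
        Y = np.fft.fft2(blurred)
        X = np.conj(H) * Y / (np.abs(H) ** 2 + k)
        return np.real(np.fft.ifft2(X))

    # Toy usage: blur a random "image" with a box kernel, then invert
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    kernel = np.ones((5, 5)) / 25.0
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img)
                                   * np.fft.fft2(kernel, s=img.shape)))
    print(np.mean((wiener_deblur(blurred, kernel) - img) ** 2))  # small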

By developing new imaging systems through innovative hardware design, CP aims not only at improving existing imaging techniques but also at developing new ways of perceiving and capturing our surroundings. However, CP does not only aim to redefine "everyday" photography; it also targets applications in scientific imaging, such as microscopy, biomedical imaging, and astronomical imaging, and can thus be expected to have a significant impact in many research areas.

After the great success of last year's workshop on CP at NIPS, this workshop accommodates the strong interest in a follow-up expressed by many of last year's participants. The objectives of this workshop are: (i) to give an introduction to CP, present current approaches and report on the latest developments in this fast-progressing field, (ii) to spot and discuss current …

Andreas Krause · Pradeep Ravikumar · Stefanie S Jegelka · Jeffrey A Bilmes

[ Melia Sol y Nieve: Slalom ]

Solving optimization problems with ultimately discrete solutions is becoming increasingly important in machine learning. At the core of statistical machine learning is inferring conclusions from data, and when the variables underlying the data are discrete, both the task of inferring the model from data and that of performing predictions using the estimated model are discrete optimization problems. Many of the resulting optimization problems are NP-hard, and typically, as the problem size increases, standard off-the-shelf optimization procedures become intractable.

Fortunately, most discrete optimization problems that arise in machine learning have specific structure, which can be leveraged in order to develop tractable exact or approximate optimization procedures. For example, consider the case of a discrete graphical model over a set of random variables. For the task of prediction, a key structural object is the "marginal polytope," a convex bounded set characterized by the underlying graph of the graphical model. Properties of this polytope, as well as its approximations, have been successfully used to develop efficient algorithms for inference. For the task of model selection, a key structural object is the discrete graph itself. Another problem structure is sparsity: While estimating a high-dimensional model for regression from a limited amount of data …
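One celebrated example of such exploitable structure (our illustration): when the set function being maximized is monotone submodular, as in coverage problems, the simple greedy algorithm is guaranteed to be within a factor (1 - 1/e) of optimal (Nemhauser et al., 1978):

    def greedy_max_cover(sets, k):
        """Greedy maximization of coverage, a monotone submodular objective."""
        chosen, covered = [], set()
        for _ in range(k):
            best = max(sets, key=lambda s: len(s - covered))  # marginal gain
            chosen.append(best)
            covered |= best
        return chosen, covered

    sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 6}]
    chosen, covered = greedy_max_cover(sets, k=2)
    print(covered)   # picks {4,5,6,7} then {1,2,3}: 7 elements covered

Diminishing marginal gains are exactly what makes the myopic greedy choice provably safe here, despite the NP-hardness of the exact problem.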