Workshops
John Langford · Alina Beygelzimer

[ Hilton: Sutcliffe A ]

This workshop is about how to design learning problems. The task of designing learning problems can be understood as roughly parallel to the mechanism design problem in economics and game theory. Recent examples of learning problem design include: (1) converting otherwise-unsupervised problems into supervised problems; (2) the use of algorithm-created ancillary prediction problems for improved representation and predictive performance; and (3) the method of reduction between learning tasks. This area is new and not entirely defined---it's our goal to bring together people interested in the topic, establish what we do and don't understand, and attempt to define the principles of learning problem design. We welcome participation---email jl@hunch.net and beygel@gmail.com if interested.
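
As a concrete toy illustration of the reduction idea in (3), and not part of the workshop itself, the sketch below reduces a multiclass problem to several binary problems via the classic one-against-all scheme; the scikit-learn classifier and the synthetic data are arbitrary illustrative choices.

```python
# Minimal sketch (illustrative, not from the workshop): the one-against-all
# reduction turns a K-class problem into K binary problems.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy 3-class problem.
X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)

# Reduction: one binary classifier per class ("is this example class k?").
binary_learners = {}
for k in np.unique(y):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, (y == k).astype(int))          # binary relabelling
    binary_learners[k] = clf

# Decoding: predict the class whose binary learner is most confident.
scores = np.column_stack([binary_learners[k].predict_proba(X)[:, 1]
                          for k in sorted(binary_learners)])
y_hat = scores.argmax(axis=1)
print("training accuracy of the reduction:", (y_hat == y).mean())
```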

Ryan Canolty · Kai J Miller

[ Westin: Emerald A ]

The "Large Scale Brain Dynamics" workshop is focused on the dynamics of electrical activity of the brain on the 5-10 mm scale, with an emphasis on multielectrode electrocorticographic (ECoG) recording. Central questions include: What are the relevant aspects of the large scale cortical signal for feature extraction? Given current clinical technology, what constraints are there on the large scale potential measurements at the brain surface? (That is, is there a cutoff frequency for signal extraction? How reproducible are phenomena? What is the true temporal, spatial, and spectral fidelity?) If two cortical areas are communicating, what sort of ECoG signal features would be present? What is the best way to track variable-delay activity in multiple brain regions associated with a complex task? The primary goal of the workshop is to provide a forum to identify and discuss the key issues of the field.

Archana Ganapathi · Sumit Basu · Fei Sha · Emre Kiciman

[ Hilton: Sutcliffe B ]

In the last few years, there has been a budding interaction between machine learning and computer systems researchers. In particular, statistical machine learning techniques have found a wide range of successful applications in many core systems areas, from designing computer microarchitectures and analyzing network traffic patterns to managing power consumption in data centers and beyond. However, connecting these two areas has its challenges: while systems problems are replete with mountains of data and hidden variables, complex sets of interacting systems, and other exciting properties, labels can be hard to come by, and the measure of success can be hard to define. Furthermore, systems problems often require much more than high classification accuracy - the answers from the algorithms need to be both justifiable and actionable. Dedicated workshops in systems conferences have emerged (for example, SysML 2006 and SysML 2007) to address this area, though they have had little visibility to the machine learning community. A primary goal of this workshop is thus to expose these new research opportunities in systems areas to machine learning researchers, in the hopes of encouraging deeper and broader synergy between the two communities. During the workshop, through various planned overviews, invited talks, poster sessions, group …

David R Hardoon · Eduardo Reck-Miranda · John Shawe-Taylor

[ Westin: Alpine (D-E) ]

Music is one of the most widespread of human cultural activities, existing in some form in all cultures throughout the world. The definition of music as organised sound is widely accepted today but a naïve interpretation of this definition may suggest the notion that music exists widely in the animal kingdom, from the rasping of crickets' legs to the songs of the nightingale. However, only in the case of humans does music appear to be surplus to any obvious biological purpose, while at the same time being a strongly learned phenomenon and involving significant higher order cognitive processing rather than eliciting simple hardwired responses.

This two-day workshop will take place at NIPS 07 (Vancouver, Canada) and will span topics from signal processing and musical structure to the cognition of music and sound. On the first day the workshop will provide a forum for cutting-edge research addressing the fundamental challenges of modeling the structure of music and analysing its effect on the brain. It will also provide a venue for interaction between the machine learning and neuroscience/brain imaging communities to discuss broader questions related to modeling the dynamics of brain activity. During the second day the workshop …

Jan Peters · Marc Toussaint

[ Hilton: Black Tusk ]

Creating autonomous robots that can assist humans in situations of daily life is a great challenge for machine learning. While this aim has been a long-standing vision of robotics, artificial intelligence, and the cognitive sciences, we have yet to achieve the first step of creating robots that can accomplish a multitude of different tasks, triggered by environmental context or higher-level instruction. Despite the wide range of machine learning problems encountered in robotics, the main bottleneck towards this goal has been a lack of interaction between the core robotics and machine learning communities. To date, many roboticists still dismiss machine learning approaches as generally inapplicable or inferior to classical, hand-crafted solutions. Similarly, machine learning researchers do not yet acknowledge that robotics can play the same role for machine learning that, for instance, physics has played for mathematics: as a major application as well as a driving force for new ideas, algorithms and approaches.

Robotics challenges can inspire and motivate new Machine Learning research, as well as being an interesting field of application for standard ML techniques. Conversely, with the current rise of real, physical humanoid robots in robotics research labs around the globe, the need for machine learning in …

Gal Chechik · Christina Leslie · Quaid Morris · William S Noble · Gunnar Rätsch · Koji Tsuda

[ Westin: Nordic ]

The field of computational biology has seen dramatic growth over the past few years, in terms of newly available data, new scientific questions, and new challenges for learning and inference. In particular, biological data is often relationally structured and highly diverse, well-suited to approaches that combine weak evidence from multiple heterogeneous sources. These data may include sequenced genomes of a variety of organisms, gene expression data from multiple technologies, protein expression data, protein sequence and 3D structural data, protein interactions, gene ontology and pathway databases, genetic variation data (such as SNPs), and an enormous amount of textual data in the biological and medical literature. New types of scientific and clinical problems require the development of novel supervised and unsupervised learning methods that can use these growing resources. The goal of this workshop is to present emerging problems and machine learning techniques in computational biology. We plan to have several speakers from the biology/bioinformatics community who will present current research problems in bioinformatics, and we invite contributed talks on novel learning approaches in computational biology. We encourage contributions describing either progress on new bioinformatics problems or work on established problems using methods that are substantially different from standard approaches. Kernel methods, …
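
As one small illustration of the kernel methods mentioned above (our own sketch, not a method endorsed by the workshop), the k-mer "spectrum kernel" compares two biological sequences by counting the length-k substrings they share:

```python
# Illustrative sketch of a k-mer spectrum kernel for biological sequences
# (one example of the kernel methods mentioned above, not a prescribed method).
from collections import Counter

def spectrum(seq: str, k: int = 3) -> Counter:
    """Count all length-k substrings (k-mers) of a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(s: str, t: str, k: int = 3) -> int:
    """Inner product of the two k-mer count vectors."""
    cs, ct = spectrum(s, k), spectrum(t, k)
    return sum(cs[kmer] * ct[kmer] for kmer in cs.keys() & ct.keys())

# Toy DNA fragments: related sequences share more k-mers.
a = "ACGTACGTGACG"
b = "ACGTACGTTACG"
c = "TTTTGGGGCCCC"
print(spectrum_kernel(a, b), spectrum_kernel(a, c))
```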

Samy Bengio · Corinna Cortes · Dennis DeCoste · Francois Fleuret · Ramesh Natarajan · Edwin Pednault · Dan Pelleg · Elad Yom-Tov

[ Hilton: Mt. Currie S ]

The ever-increasing size of the data available to machine learning algorithms has motivated several approaches, from online algorithms to parallel and distributed computing on multi-node clusters. Nevertheless, it is not clear how modern machine learning approaches can either cope with such parallel machinery or take into account strong constraints on the time available to handle training and/or test examples. This workshop will explore two alternatives: (1) modern machine learning approaches that can handle real-time processing at training and/or test time under strict computational constraints (when the flow of incoming data is continuous and needs to be handled), and (2) modern machine learning approaches that can take advantage of new commodity hardware such as multicore processors, GPUs, and fast networks. This two-day workshop aims to set the agenda for future advancements by fostering a discussion of new ideas and methods and by demonstrating the potential uses of readily available solutions. It will bring together both researchers and practitioners to offer their views and experience in applying machine learning to large-scale learning.
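
For concreteness, here is a minimal sketch of the online-learning alternative: a linear classifier updated by stochastic gradient descent one example at a time, so that memory stays constant however large the stream grows. The model, step size, and simulated stream are illustrative assumptions, not material from the workshop.

```python
# Minimal sketch of online learning for large-scale data: logistic regression
# trained by stochastic gradient descent, one example at a time.
# (Illustrative only; the parameters below are arbitrary choices.)
import numpy as np

rng = np.random.default_rng(0)
d = 20                                  # feature dimension
w_true = rng.normal(size=d)
w = np.zeros(d)
lr = 0.1

def stream(n_examples):
    """Simulate an unbounded stream of (x, y) pairs."""
    for _ in range(n_examples):
        x = rng.normal(size=d)
        y = 1.0 if x @ w_true > 0 else 0.0
        yield x, y

errors = 0
for t, (x, y) in enumerate(stream(10_000), start=1):
    p = 1.0 / (1.0 + np.exp(-w @ x))    # predict before updating
    errors += int((p > 0.5) != (y > 0.5))
    w += lr / np.sqrt(t) * (y - p) * x  # single SGD step
print("online error rate:", errors / t)
```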

Michael Aupetit · Frederic Chazal · Gilles Gasso · David Cohen-Steiner · pierre gaillard

[ Westin: Glacier ]

There is growing interest in Machine Learning in applying geometrical and topological tools to high-dimensional data analysis and processing. Given a finite set of points in a high-dimensional space, the approaches developed in the field of Topology Learning aim to learn, explore and exploit the topology of the shapes (topological invariants such as the intrinsic dimension or the Betti numbers), manifolds or not, from which these points are assumed to be drawn. Applications likely to benefit from these topological characteristics have been identified in Exploratory Data Analysis, Pattern Recognition, Process Control, Semi-Supervised Learning, Manifold Learning and Clustering. However, the integration of the problems faced in Topology Learning into the Machine Learning and Statistics frameworks is still in its infancy. We therefore hope this workshop will ignite cross-fertilization between Machine Learning, Computational Geometry and Topology, benefiting all of them by leading to new approaches, deeper understanding, and stronger theoretical results for the problems raised by Topology Learning.

Please check the Workshop website for schedule changes.
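
As a toy illustration of one of the quantities mentioned above, the sketch below estimates the intrinsic dimension of a point cloud by local PCA on nearest-neighbour patches; the data, neighbourhood size, and variance threshold are our own illustrative assumptions, not a method proposed by the workshop.

```python
# Toy sketch: estimate the intrinsic dimension of a point cloud by local PCA.
# (Our own simplified illustration of one quantity discussed above.)
import numpy as np

rng = np.random.default_rng(1)

# Sample points from a 1-dimensional manifold (a circle) embedded in R^10.
t = rng.uniform(0, 2 * np.pi, size=500)
X = np.zeros((500, 10))
X[:, 0], X[:, 1] = np.cos(t), np.sin(t)
X += 0.001 * rng.normal(size=X.shape)       # small ambient noise

def local_intrinsic_dim(X, k=20, var_threshold=0.95):
    """Median number of principal components needed to explain
    `var_threshold` of the variance in each k-nearest-neighbour patch."""
    dims = []
    for x in X:
        idx = np.argsort(((X - x) ** 2).sum(axis=1))[:k]
        patch = X[idx] - X[idx].mean(axis=0)
        eigvals = np.linalg.svd(patch, compute_uv=False) ** 2
        ratios = np.cumsum(eigvals) / eigvals.sum()
        dims.append(int(np.searchsorted(ratios, var_threshold) + 1))
    return int(np.median(dims))

print("estimated intrinsic dimension:", local_intrinsic_dim(X))  # expect ~1
```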

Matthias Seeger · David Barber · Neil D Lawrence · Onno Zoeter

[ Hilton: Diamond Head ]

Deterministic (variational) techniques are used throughout Machine Learning to approximate Bayesian inference for continuous- and hybrid-variable problems. In contrast to discrete-variable approximations, surprisingly little is known about convergence, quality of approximation, numerical stability, specific biases, and the differential strengths and weaknesses of known methods. In this workshop, we aim to highlight important problems and to gather ideas for how to address them. The target audience is practitioners, offering insight into and analysis of problems with particular methods or comparative studies of several methods, as well as theoreticians interested in characterizing the hardness of continuous distributions or in proving relevant properties of established methods. We especially welcome contributions from Statistics (Markov Chain Monte Carlo), Information Geometry, Optimal Filtering, and other related fields if they make an effort to bridge the gap towards variational techniques.
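
To make the setting concrete, here is a sketch of one of the simplest deterministic Gaussian approximations, the Laplace approximation, applied to a one-dimensional logistic-regression posterior; the data and prior are arbitrary illustrative choices, not an example drawn from the workshop.

```python
# Sketch of a simple deterministic Gaussian approximation (Laplace) to a
# one-dimensional, non-Gaussian posterior.  Illustrative only.
import numpy as np
from scipy.optimize import minimize_scalar

# Unnormalised log-posterior: N(0, 1) prior plus Bernoulli likelihoods
# for a single logistic-regression weight theta.
x = np.array([1.0, 2.0, -1.0, 0.5])
y = np.array([1, 1, 0, 1])

def neg_log_post(theta):
    logits = theta * x
    log_lik = np.sum(y * logits - np.log1p(np.exp(logits)))
    log_prior = -0.5 * theta ** 2
    return -(log_lik + log_prior)

# 1. Find the posterior mode.
mode = minimize_scalar(neg_log_post).x

# 2. Curvature at the mode gives the approximating Gaussian's precision.
eps = 1e-4
curv = (neg_log_post(mode + eps) - 2 * neg_log_post(mode)
        + neg_log_post(mode - eps)) / eps ** 2
print(f"Laplace approximation: N(mean={mode:.3f}, var={1.0 / curv:.3f})")
```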

Richard Turner · Pietro Berkes · Maneesh Sahani

[ Westin: Alpine A-C ]

Models for simple cells are now well established; how can the field progress beyond them? The goal of this workshop is to assess the success of computational models of cortical processing based on probabilistic models and to suggest new directions for future research. Issues to be discussed include: How can we bridge the gap between objects and Gabors? What experimental results should we attempt to model? Are current comparisons with physiology at all relevant? What experimental results would we like to have? What aspects of visual input are relevant for modeling (eye movements, head movements, color, etc.)? How relevant is time? We will review these important questions through both theoretical and experimental lenses. We have invited a panel of experimentalists who will drive the discussion, with the hope of inspiring new research directions of greater general neuroscientific interest. We encourage poster submissions from participants; see the web site for details.
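
For readers unfamiliar with the "Gabors" referred to above, the sketch below builds the standard Gabor receptive-field model of a V1 simple cell and applies it to a random image patch; all parameter values are arbitrary illustrative choices rather than anything proposed at the workshop.

```python
# Sketch of the standard Gabor model of a V1 simple-cell receptive field
# (the "Gabors" referred to above).  Parameter values are arbitrary.
import numpy as np

def gabor(size=32, wavelength=8.0, theta=np.pi / 4, sigma=5.0, phase=0.0):
    """2-D Gabor: a sinusoidal grating windowed by a Gaussian envelope."""
    half = size // 2
    ys, xs = np.mgrid[-half:half, -half:half]
    xr = xs * np.cos(theta) + ys * np.sin(theta)     # rotate coordinates
    yr = -xs * np.sin(theta) + ys * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength + phase)
    return envelope * carrier

# A simple cell's response is often modelled as a rectified linear filter.
rng = np.random.default_rng(0)
image_patch = rng.normal(size=(32, 32))
response = max(0.0, float(np.sum(gabor() * image_patch)))
print("model simple-cell response:", response)
```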

Denny Zhou · Olivier Chapelle · Thorsten Joachims · Thomas Hofmann

[ Hilton: Mt. Currie N. ]

This workshop is intended for people interested in both machine learning and web search. With its tens of billions of unstructured and dynamic pages and its increasing number of users, the World Wide Web poses great new challenges to existing machine learning algorithms, and at the same time fuels the rapid development of new machine learning techniques. This workshop aims to bring machine learning and web search people together to discuss the fundamental issues in web search, from relevance ranking and web spam detection to online advertising. The topics will focus mainly on new web page ranking algorithms as well as online advertising issues such as click-rate prediction and content matching.

Yael Niv · Matthew Botvinick · Andrew G Barto

[ Westin: Callaghan ]

The aim of this workshop is to discuss current ideas from computer science, psychology and neuroscience regarding learning and control of hierarchically structured behavior. Psychological research has long emphasized that human behavior is hierarchically structured. Indeed, a hierarchical organization of human behavior that matches the hierarchical structure of real-world problems has been the focus of much empirical and theoretical research, and has played a pivotal role in research on organized, goal-directed behavior. Behavioral hierarchy has been of longstanding interest within neuroscience as well, where it has been considered to relate closely to prefrontal cortical function. The prefrontal cortex, which supports high cognitive functions yet remains one of the most poorly understood areas of the brain, has been repeatedly implicated in supporting and executing hierarchical learning and control. In yet a third field, recent developments within machine learning have led to the emergence of 'hierarchical reinforcement learning'. This line of research has begun investigating in depth how optimal control can learn, and make use of, hierarchical structures, specifically, how hierarchies of skills (also termed options, macros or temporally abstract actions) could be learned and utilized optimally. The workshop will bring together front-line researchers from each of these fields, with the aim of gleaning …
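
For readers unfamiliar with the "options" terminology, the sketch below spells out the standard three-part definition of an option (initiation set, internal policy, termination condition) on a toy corridor world; it is our own illustration of the concept, not material from the workshop.

```python
# Sketch of an "option" (temporally abstract action): an initiation set,
# an internal policy, and a termination condition.  Toy corridor world,
# purely illustrative.
from dataclasses import dataclass
from typing import Callable, Set

@dataclass
class Option:
    initiation: Set[int]                  # states where the option may start
    policy: Callable[[int], int]          # primitive action in each state
    terminates: Callable[[int], bool]     # whether to stop in a state

# Corridor of states 0..10; primitive actions are -1 (left) and +1 (right).
go_to_door = Option(
    initiation=set(range(0, 10)),
    policy=lambda s: +1,                  # always move right
    terminates=lambda s: s == 10,         # stop at the "door"
)

def run_option(option: Option, state: int) -> int:
    """Execute the option until termination; return the resulting state."""
    assert state in option.initiation
    while not option.terminates(state):
        state += option.policy(state)
    return state

print(run_option(go_to_door, state=3))    # -> 10
```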

[ Westin Hotel, Emerald C Ballroom ]

IBM Research invites all students attending the NIPS workshops to enjoy appetizers and refreshments, and to mingle with fellow students and several IBM scientists, who will be on hand to give poster presentations and a slide presentation discussing ongoing research at IBM in the areas of machine learning and other research topics of interest to NIPS.

Schedule:
7:00-8:00pm: Posters/Socializing
8:00-9:00pm: "Overview of Machine Learning research at IBM"

Attendance limited to students only.

Kevin Murphy · Lise Getoor · Eric Xing · Raphael Gottardo

[ Hilton: Cheakamus ]

The purpose of the workshop is to bring together people from different disciplines - computer science, statistics, biology, physics, social science, etc. - to discuss foundational issues in the modeling of network and relational data. In particular, we hope to discuss various open research issues, such as: (1) how to represent graphs at varying levels of abstraction, whose topology is potentially condition-specific and time-varying; (2) how to combine techniques from the graphical model structure learning community with techniques from the statistical network modeling community; and (3) how to integrate relational data with other kinds of data (e.g., gene expression or text data).

Virginia Savova · Josh Tenenbaum · Leslie Kaelbling · Alan Yuille

[ Westin: Alpine A-C ]

The human ability to acquire a visual concept from a few examples, and to recognize instances of that concept in the context of a complex scene, poses a central challenge to the fields of computer vision, cognitive science, and machine learning. Representing visual objects and scenes as the human mind does is likely to require structural sophistication, something akin to a grammar for image parsing, with multiple levels of hierarchy and abstraction, rather than the "flat" feature vectors which are standard in most statistical pattern recognition. Grammar-based approaches to vision have been slow to develop, largely due to the absence of effective methods for learning and inference under uncertainty. However, recent advances in machine learning and statistical models for natural language have inspired a renewed interest in structural representations of visual objects, categories, and scenes. The result is a new and emerging body of research in computational visual cognition that combines sophisticated probabilistic methods for learning and inference with classical grammar-based approaches to representation. The goal of our workshop is to explore these new directions, in the context of several interdisciplinary connections that converge distinctively at NIPS. We will focus on these challenges: How can we learn better probabilistic grammars …

Kenji Fukumizu · Arthur Gretton · Alexander Smola

[ Hilton: Diamond Head ]

When dealing with distributions it is in general infeasible to estimate them explicitly in high dimensional settings, since the associated learning rates are quite slow. On the other hand, a great variety of applications in machine learning and computer science require distribution estimation and/or comparison. Examples include testing for homogeneity (the "two-sample problem"), independence, and conditional independence, where the last two can be used to infer causality; data set squashing / data sketching / data anonymisation; domain adaptation (the transfer of knowledge learned on one domain to solving problems on another, related domain) and the related problem of covariate shift; message passing in graphical models (EP and related algorithms); compressed sensing; and links between divergence measures and loss functions. The purpose of this workshop is to bring together statisticians, machine learning researchers, and computer scientists working on representations of distributions for various inference and testing problems, to discuss the compromises necessary in obtaining useful results from finite data. In particular, what are the capabilities and weaknesses of different distribution estimates and comparison strategies, and what negative results apply?
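
As one concrete instance of the two-sample problem mentioned above, the sketch below computes a kernel maximum mean discrepancy (MMD) statistic with a permutation null; the Gaussian-kernel bandwidth, sample sizes, and number of permutations are illustrative choices rather than recommendations from the workshop.

```python
# Sketch of a kernel two-sample test: maximum mean discrepancy (MMD) with a
# Gaussian kernel and a permutation null.  Bandwidth and sizes are illustrative.
import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared MMD between samples X and Y."""
    return (gaussian_gram(X, X, sigma).mean()
            + gaussian_gram(Y, Y, sigma).mean()
            - 2 * gaussian_gram(X, Y, sigma).mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(100, 2))
Y = rng.normal(0.5, 1.0, size=(100, 2))     # shifted distribution

observed = mmd2(X, Y)
pooled = np.vstack([X, Y])
null = []
for _ in range(200):                        # permutation test
    perm = rng.permutation(len(pooled))
    null.append(mmd2(pooled[perm[:100]], pooled[perm[100:]]))
p_value = np.mean(np.array(null) >= observed)
print(f"MMD^2 = {observed:.4f}, permutation p-value = {p_value:.3f}")
```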

Hendrik Purwins · Xavier Serra · Klaus H Obermayer

[ Westin: Alpine D-E ]

This two-day workshop will span topics from signal processing and musical structure to the cognition of music and sound. On the first day the workshop will provide a forum for cutting-edge research addressing the fundamental challenges of modeling the structure of music and analysing its effect on the brain. It will also be a venue for interaction between the machine learning and neuroscience/brain imaging communities to discuss broader questions related to modeling the dynamics of brain activity. During the second day the workshop will focus on the modeling of sound, music perception and cognition. With machine learning playing a crucial role, these have the potential to provide a breakthrough in various areas of music technology, in particular: Music Information Retrieval (MIR), expressive music synthesis, interactive music making, and sound design.

Jillian H Fecteau · Dirk B Walther · Vidhya Navalpakkam · John K Tsotsos

[ Westin: Glacier ]

Visual attention is a basic cognitive process; it reflects the ability to select and process one object or small region of space in the visual scene. Albeit basic, the concept of visual attention has garnered much interest, and many different computational theories have been proposed to explain its neural origins. In this workshop, we will celebrate the strengths and examine the weaknesses of three different theories of attention. Following a brief tutorial lecture, we will debate the relative merits of the Biased Competition Model of Attention, the Salience Map of Attention, and the Selective Tuning Model of Attention. Through this debating process, we will consider the strengths and weaknesses of alternative models as well. Finally, we will debate the computational principles that underlie these models, such as the relevance of information theory, signal detection theory, and optimality principles in capturing the influence of visual attention on neural activity and behavior. Indeed, how might dynamical systems be designed to model attentive behavior? The unorthodox organization of this workshop will encourage candid discussions -- the invited speakers will discuss the merits of one of these models and/or the weaknesses of another. We hope you join us in this venture.
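
To ground the "Salience Map" idea in something concrete, here is a toy sketch of a center-surround salience computation on an intensity image; it is a deliberately simplified illustration, not the full model debated at the workshop or any speaker's proposal.

```python
# Toy sketch of a center-surround salience map on an intensity image
# (a simplified illustration of the "salience map" idea, not a full model).
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
image = rng.uniform(size=(128, 128)) * 0.1
image[60:68, 60:68] += 1.0                    # a conspicuous bright patch

# Salience as the absolute difference between fine ("center") and coarse
# ("surround") Gaussian-blurred versions of the image.
center = gaussian_filter(image, sigma=2)
surround = gaussian_filter(image, sigma=8)
salience = np.abs(center - surround)

peak = np.unravel_index(np.argmax(salience), salience.shape)
print("most salient location:", peak)         # should fall inside the patch
```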

Sebastian Thrun · Chris Urmson · Raul Rojas · William Uther

[ Hilton: Black Tusk ]

Richard Lippman · Pavel Laskov

[ Hilton: Sutcliffe A ]

Computer and network security has become an important research area due to the alarming recent increase in hacker activity motivated by profit as well as ideological and national conflicts. Increases in spam, botnets, viruses, malware, key loggers, software vulnerabilities, zero-day exploits and other threats contribute to growing concerns about security. In the past few years, many researchers have begun to apply machine learning techniques to these and other security problems. Security, however, is a difficult area because adversaries actively manipulate training data and vary attack techniques to defeat new systems. A main purpose of this workshop is to examine adversarial machine learning problems across different security applications to see if there are common problems, effective solutions, and theoretical results to guide future research, and to determine if machine learning can indeed work well in adversarial environments. Another purpose is to initiate a dialog between computer security and machine learning researchers already working on various security applications, and to draw wider attention to computer security problems in the NIPS community.

Joaquin Quiñonero-Candela · Thore K Graepel · Ralf Herbrich

[ Hilton: Mt. Currie N ]

Computer game sales are three times larger than industry software sales, and on par with Hollywood box office sales. Modern computer games are often based on extremely complex simulations of the real world and constitute one of the very few real fields of application for artificial intelligence encountered in everyday life. Surprisingly, machine learning methods are not present in the vast majority of computer games. There have been a few recent and notable successes in turn-based, two-player, discrete-action-space games such as Backgammon, Checkers, Chess and Poker. However, these successes stand in stark contrast to the difficulties still encountered in the majority of computer games, which typically involve more than two agents choosing from a continuum of actions in complex artificial environments. Typical game AI is still largely built around fixed systems of rules that often result in implausible or predictable behaviour and poor user experience. The purpose of this workshop is to involve the NIPS community in the exciting challenges that games - ranging from traditional table-top games to cutting-edge console and PC games - offer to machine learning.
