Tutorial Speakers


Michael I. Jordan
Department of Electrical Engineering and Computer Science, and Department of Statistics, University of California, Berkeley


Tutorial: 
Nonparametric Bayesian Methods: Dirichlet Processes, Chinese Restaurant Processes and All That

Abstract: 
This tutorial will provide a general introduction to Bayesian nonparametrics, with particular focus on the Dirichlet process and the Chinese restaurant process.  These methods provide ways to take advantage of Bayesian methodology (most notably the ability to define hierarchical models and thereby transfer statistical strength among related inference problems) in a setting in which the complexity of a model is allowed to grow as the number of data points grows.  Dating back to the 1960s, Bayesian nonparametric methods have traditionally found applications in areas such as population genetics and survival analysis, fields which naturally blend basic probabilistic laws with flexible nonparametric modeling assumptions.  Machine learning researchers have begun to explore Bayesian nonparametrics in recent years, a trend which is likely to continue.
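The Chinese restaurant process mentioned above has a very simple sequential description: customer i joins an existing table with probability proportional to the number of customers already seated there, or opens a new table with probability proportional to a concentration parameter alpha. A minimal sketch in Python (illustrative only; the function name and parameters are ours, not from the tutorial):

```python
import random

def crp_partition(n, alpha, seed=0):
    """Sample a partition of n customers from a Chinese restaurant process.

    Customer i sits at existing table k with probability |table k| / (i + alpha)
    and opens a new table with probability alpha / (i + alpha).
    """
    rng = random.Random(seed)
    tables = []        # tables[k] = number of customers seated at table k
    assignments = []   # assignments[i] = table index chosen by customer i
    for i in range(n):
        r = rng.uniform(0, i + alpha)
        acc = 0.0
        k = len(tables)            # default: open a new table
        for j, size in enumerate(tables):
            acc += size
            if r < acc:
                k = j
                break
        if k == len(tables):
            tables.append(0)
        tables[k] += 1
        assignments.append(k)
    return assignments, tables

assignments, tables = crp_partition(100, alpha=2.0)
# the 100 customers are partitioned among len(tables) clusters
```

Because the expected number of occupied tables grows roughly as alpha * log(n), the number of clusters grows with the amount of data, which is the hallmark of the nonparametric approach described above.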

Bio:  
Michael Jordan is Professor in the Department of Electrical Engineering and Computer Science and the Department of Statistics at the University of California at Berkeley.  He received his master's degree from Arizona State University and earned his PhD from the University of California, San Diego.  He was a professor at the Massachusetts Institute of Technology for eleven years.  He has published over 250 research articles on topics in computer science, electrical engineering, statistics, molecular biology and cognitive neuroscience. His research in recent years has focused on probabilistic graphical models, kernel machines, nonparametric Bayesian methods and applications to problems in bioinformatics, information retrieval, and signal processing.  He is a recipient of an NSF Presidential Young Investigator Award.  He is a Fellow of the IMS, a Fellow of the IEEE and a Fellow of the AAAI.

Website:
  http://www.cs.berkeley.edu/~jordan/

 

********************


Nancy Kanwisher
Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology

Tutorial:
 Reading Brains:  fMRI Studies of Human Vision

Abstract:
Functional MRI provides new ways to investigate how visual information is processed and represented in the human brain.  Here, we will describe new methods to probe the neural representation of faces, complex objects, and fundamental visual features across the human visual pathway.  Recently developed methods, such as fMRI adaptation and pattern analysis of ensemble activity, now allow researchers to study neural representations at spatial scales that greatly exceed the resolution of fMRI itself.  The first part of the tutorial will describe studies of domain specificity in the human brain, focusing on the functional properties of the fusiform face area (FFA), parahippocampal place area (PPA), and extrastriate body area (EBA). The functional selectivity of these brain areas, their origins and their development, and their role in conscious visual recognition will also be discussed.  The second part of the tutorial will describe pattern analysis methods to measure the ensemble feature selectivity and ensemble object selectivity across the human visual pathway, from primary visual cortex to object areas beyond retinotopic cortex.  The ability to decode a person's conscious perceptual state from cortical activity patterns will also be discussed.  Central themes throughout this tutorial include the relationship between visual selectivity and functional specialization, the information content of cortical signals across different visual areas, and the role of these areas in visual recognition and conscious perception.
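As a rough illustration of the pattern-analysis idea described above, the following sketch decodes which of two stimulus classes produced a simulated multivoxel activity pattern, using the simplest possible classifier (nearest class mean by correlation). All sizes and noise levels are invented for illustration; real studies use measured fMRI responses:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 50, 40   # invented sizes, purely for illustration

# each stimulus class evokes a weak but consistent spatial pattern ...
patterns = rng.normal(0.0, 1.0, size=(2, n_voxels))
# ... buried in much larger trial-to-trial noise
X = np.vstack([patterns[c] + rng.normal(0.0, 2.0, size=(n_trials, n_voxels))
               for c in (0, 1)])
y = np.repeat([0, 1], n_trials)

# hold out the last 10 trials of each class for testing
train = np.r_[0:30, 40:70]
test = np.r_[30:40, 70:80]

# nearest-class-mean decoding by correlation with each class's mean pattern
means = np.array([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
pred = np.array([int(np.corrcoef(x, means[1])[0, 1] >
                     np.corrcoef(x, means[0])[0, 1]) for x in X[test]])
accuracy = (pred == y[test]).mean()
```

Even though no single simulated voxel reliably distinguishes the two classes, pooling the weak signal across the ensemble decodes the stimulus well above chance, which is the intuition behind reading out perceptual states from coarse fMRI signals.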

Bio: 
Nancy Kanwisher is Ellen Swallow Richards Professor in the Department of Brain and Cognitive Sciences at MIT, and Investigator at MIT's McGovern Institute for Brain Research. She received her B.S. in 1980 and her PhD in 1986, both from MIT.  After teaching for several years at UCLA and then at Harvard, she returned to MIT in 1997.  Kanwisher's research concerns the cognitive and neural mechanisms underlying visual experience, using behavioral methods, functional magnetic resonance imaging (fMRI), and magnetoencephalography (MEG).  Her lab has contributed to the identification and characterization of four new regions in the human brain involved in the visual perception of faces, places, bodies, and objects.  She received a MacArthur Foundation Fellowship in Peace and International Security in 1986, a Troland Research Award from the National Academy of Sciences in 1999, and a MacVicar Faculty Fellow teaching award from MIT in 2002.  She was elected to the National Academy of Sciences in 2005.

Website:
  http://web.mit.edu/bcs/nklab/

 

***********************


Brian Milch
Department of Electrical Engineering and Computer Science, University of California, Berkeley

Tutorial:
  First-Order Probabilistic Languages:  Into the Unknown

Abstract:
If humans are to be understood as probabilistic reasoners, it must be the case that they are able to learn new probabilistic models from their experience and to use these models without needing to be "reprogrammed" with new inference algorithms.  To duplicate these abilities in a computer, we need a formal representation for probability models: this serves as the output of structure learning algorithms and an input to inference algorithms.  Graphical models are one such representation. They are useful for modeling the attributes of a fixed set of objects with fixed relations among them.  But for scenarios where the number of objects, the relations among objects, and the mapping from observations to underlying objects may all be unknown, graphical models with fixed sets of nodes and edges are no longer appropriate.

This tutorial will begin by surveying recent work on first-order probabilistic languages (FOPLs), which, like first-order logic, explicitly represent objects and the relations among them.  We will present algorithms for learning the structure of such models and for speeding up inference by generalizing across objects.  We will then discuss how to represent uncertainty about what objects exist, what relations hold among them, and whether two observations correspond to the same object.  These higher levels of uncertainty require a representation language with more sophisticated semantics, and motivate new inference algorithms.  Our discussion will be illustrated with examples from social network analysis, textual co-reference resolution, and sensor data association.
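To see why uncertainty about objects goes beyond a fixed graphical model, consider a toy generative story (plain Python for illustration, not the syntax of any actual FOPL): an unknown number of aircraft each produce at most one noisy radar blip.

```python
import numpy as np

def sample_radar_world(seed=0):
    """Toy generative story with an unknown number of objects.

    Illustrative only: the number of aircraft is itself random, each
    aircraft produces at most one noisy radar blip, and some aircraft
    may go undetected entirely.
    """
    rng = np.random.default_rng(seed)
    n_aircraft = 1 + rng.poisson(2)            # how many objects exist?
    positions = rng.uniform(0.0, 10.0, size=n_aircraft)
    blips = [float(x + rng.normal(0.0, 0.1))   # noisy observation of position
             for x in positions
             if rng.random() < 0.9]            # each detection can fail
    return n_aircraft, positions, blips

n, positions, blips = sample_radar_world()
# Inference must reason jointly about how many aircraft exist, which blip
# came from which aircraft, and whether any aircraft went unobserved --
# exactly the existence and identity uncertainty discussed above.
```

No fixed set of nodes and edges can represent this model, because the number of latent variables (aircraft) and the observation-to-object mapping are themselves uncertain.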

Bio:  Brian Milch is a Ph.D. candidate in computer science at the University of California at Berkeley.  He received his B.S. with honors in Symbolic Systems from Stanford University, where he did artificial intelligence research with Prof. Daphne Koller.  He then spent a year as a research engineer at Google before entering the Berkeley Ph.D. program in 2001.  His thesis research, with Prof. Stuart Russell, is on representation and inference for models that combine probability and first-order logic.  He is the recipient of an NSF Graduate Research Fellowship.

Website:
  http://www.cs.berkeley.edu/~milch

 

***********************


Bruno Olshausen
Department of Neurobiology, Physiology and Behavior and Center for Neuroscience, UC Davis; Redwood Neuroscience Institute

Tutorial:
 Natural Scene Statistics and Biological Vision:  From Pixels to Percepts

Abstract:
 Our percepts of the world are clearly *inferred*, rather than being derived directly from the available data.  This means that our brains must be endowed with powerful inferential machinery -- i.e., probabilistic models -- for combining incoming sensory information with prior knowledge of the natural environment in order to infer what's "out there."  This tutorial will focus on recent efforts to characterize the statistical structure of the natural environment and its relation to neural representations in the visual system.  Many aspects of early visual processing -- for example, contrast sensitivity, adaptation, and receptive field properties -- may be understood in terms of efficient coding strategies adapted to the spatio-temporal structure contained in image pixels.  However, one of the great challenges that lie ahead is to extend this approach to learn about aspects of intermediate-level representations, such as form invariance or surface representation, and some current efforts (and future prospects) in this direction will be discussed.  The study of natural scene statistics has also encouraged the use of natural scenes as stimuli in psychophysical and neurophysiological experiments, and the results of these studies are beginning to teach us new lessons about visual system function at all stages of processing.

Bio:
Bruno Olshausen received B.S. and M.S. degrees in Electrical Engineering from Stanford University, and a Ph.D. in Computation and Neural Systems from the California Institute of Technology.  He is currently Associate Professor of Neurobiology, Physiology & Behavior at UC Davis, and Principal Investigator at the Redwood Neuroscience Institute in Menlo Park.  He recently chaired the 2004 Gordon Research Conference on "Sensory coding and the natural environment."

Website:
  http://redwood.ucdavis.edu/bruno/

 

***********************


Stuart Russell
Department of Electrical Engineering and Computer Science, University of California, Berkeley

Tutorial:
  First-Order Probabilistic Languages:  Into the Unknown

Abstract:
  If humans are to be understood as probabilistic reasoners, it must be the case that they are able to learn new probabilistic models from their experience and to use these models without needing to be "reprogrammed" with new inference algorithms.  To duplicate these abilities in a computer, we need a formal representation for probability models: this serves as the output of structure learning algorithms and an input to inference algorithms.  Graphical models are one such representation. They are useful for modeling the attributes of a fixed set of objects with fixed relations among them.  But for scenarios where the number of objects, the relations among objects, and the mapping from observations to underlying objects may all be unknown, graphical models with fixed sets of nodes and edges are no longer appropriate.

This tutorial will begin by surveying recent work on first-order probabilistic languages (FOPLs), which, like first-order logic, explicitly represent objects and the relations among them.  We will present algorithms for learning the structure of such models and for speeding up inference by generalizing across objects.  We will then discuss how to represent uncertainty about what objects exist, what relations hold among them, and whether two observations correspond to the same object.  These higher levels of uncertainty require a representation language with more sophisticated semantics, and motivate new inference algorithms.  Our discussion will be illustrated with examples from social network analysis, textual co-reference resolution, and sensor data association.

Bio:
  Stuart Russell received his B.A. with first-class honors in physics from Oxford University in 1982 and his Ph.D. in computer science from Stanford in 1986.  He then joined the faculty of the University of California at Berkeley, where he is a professor of computer science, director of the Center for Intelligent Systems, and holder of the Smith-Zadeh Chair in Engineering.  In 1990, he received the Presidential Young Investigator Award of the National Science Foundation, and in 1995 he was co-winner of the Computers and Thought Award. He was a 1996 Miller Professor at the University of California and was appointed to a Chancellor's Professorship in 2000. In 1998, he gave the Forsythe Memorial Lectures at Stanford University. He is a Fellow and former Executive Council member of the American Association for Artificial Intelligence, a Fellow of the Association for Computing Machinery, and Secretary of the International Machine Learning Society.  He has published over 100 papers on a wide range of topics in artificial intelligence. His books include "The Use of Knowledge in Analogy and Induction" (Pitman, 1989), "Do the Right Thing: Studies in Limited Rationality" (with Eric Wefald, MIT Press, 1991), and "Artificial Intelligence: A Modern Approach" (with Peter Norvig, Prentice Hall, 1995, 2003).

Website:
  http://www.cs.berkeley.edu/~russell

 

***********************


Lawrence Saul
Department of Computer and Information Science, University of Pennsylvania

Tutorial:
 Spectral Methods for Dimensionality Reduction

Abstract:
 How can we detect low dimensional structure in high dimensional data?  If the data is mainly confined to a low dimensional subspace, then simple linear methods can be used to discover the subspace and estimate its dimensionality.  More generally, though, if the data lies on (or near) a low dimensional submanifold, then its structure may be highly nonlinear, and linear methods are bound to fail. 

Spectral methods have recently emerged as a powerful tool for nonlinear dimensionality reduction and manifold learning. These methods are able to reveal low dimensional structure in high dimensional data from the top or bottom eigenvectors of specially constructed matrices.  The matrices are constructed from sparse weighted graphs whose vertices represent input patterns and whose edges indicate neighborhood relations. The main computations for manifold learning are based on highly tractable optimizations, such as shortest path problems, least squares fits, semidefinite programming, and matrix diagonalization.
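As one concrete instance of the recipe just described, the following sketch builds a symmetrized k-nearest-neighbor graph over the input patterns and embeds the data using the bottom eigenvectors of its graph Laplacian (in the spirit of Laplacian eigenmaps; the function name, defaults, and toy data are illustrative, and no attention is paid to efficiency):

```python
import numpy as np

def laplacian_eigenmaps(X, n_neighbors=10, dim=2):
    """Embed the rows of X via bottom eigenvectors of the graph Laplacian."""
    n = len(X)
    # pairwise squared Euclidean distances between input patterns
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    # sparse symmetrized k-nearest-neighbor graph with binary weights
    W = np.zeros((n, n))
    for i in range(n):
        neighbors = np.argsort(d2[i])[1:n_neighbors + 1]  # entry 0 is i itself
        W[i, neighbors] = 1.0
    W = np.maximum(W, W.T)
    # unnormalized graph Laplacian L = D - W
    L = np.diag(W.sum(axis=1)) - W
    vals, vecs = np.linalg.eigh(L)       # eigenvalues in ascending order
    # skip the constant eigenvector (eigenvalue ~ 0); keep the next `dim`
    return vecs[:, 1:dim + 1]

# toy usage: 200 points on a circle in the plane
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
X = np.c_[np.cos(theta), np.sin(theta)]
Y = laplacian_eigenmaps(X, n_neighbors=8, dim=2)   # Y has shape (200, 2)
```

The only heavy computation here is a symmetric eigendecomposition; other spectral methods mentioned above swap in different graph-based matrices (geodesic distances for Isomap, local reconstruction weights for LLE, a learned kernel for semidefinite embedding) but share the same overall structure.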

In this tutorial, I will provide an overview of unsupervised learning algorithms that can be viewed as spectral methods for linear and nonlinear dimensionality reduction.

Bio:
 Lawrence Saul received his A.B. in Physics from Harvard (1990) and his Ph.D. in Physics from M.I.T. (1994).  He stayed at M.I.T. for two more years as a postdoctoral fellow in the Center for Biological and Computational Learning, then joined the Speech and Image Processing Center of AT&T Labs in Florham Park, NJ.  In 1999, the MIT Technology Review recognized him as one of 100 top young innovators. He has been an Assistant Professor at the University of Pennsylvania since January 2002.  More recently, he served as Program Chair and General Chair for the 2003-2004 conferences on Neural Information Processing Systems.  He is currently serving on the Editorial Board for the Journal of Machine Learning Research.

Website:
  http://www.cis.upenn.edu/~lsaul/

 

***********************

 

Satinder Singh
Department of Computer Science and Engineering, University of Michigan

Tutorial:  Reinforcement Learning in Artificial Intelligence:  Learning, Planning and Knowledge Representation

Abstract: Over the last decade and more, there has been rapid theoretical and empirical progress in reinforcement learning (RL) using the well-established formalisms of Markov decision processes (MDPs) and partially observable MDPs (POMDPs).  In the first half of the tutorial, I will summarize the available theory of learning and planning in RL, including the state-of-the-art approaches to solving the temporal credit assignment problem and the function approximation problem.
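For readers unfamiliar with planning in the MDP formalism, value iteration is the classical example of the kind of algorithm this half covers; a minimal sketch (the toy problem and variable names are ours, not the tutorial's):

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Plan in a fully known MDP by value iteration.

    P[a] is the |S| x |S| transition matrix for action a, and R[a][s] is
    the expected reward for taking action a in state s.
    """
    n_actions = len(P)
    V = np.zeros(P[0].shape[0])
    while True:
        # one Bellman backup per action, then act greedily
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(n_actions)])
        V_new = Q.max(axis=0)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# toy 2-state MDP: state 1 is rewarding; action 1 tends to switch states
P = [np.array([[0.9, 0.1], [0.1, 0.9]]),    # action 0: tend to stay put
     np.array([[0.1, 0.9], [0.9, 0.1]])]    # action 1: tend to switch
R = [np.array([0.0, 1.0]), np.array([0.0, 1.0])]
V, policy = value_iteration(P, R)
# the greedy policy switches out of state 0 (toward the reward) and
# stays put in state 1
```

Learning enters, roughly speaking, when P and R are not given and must be estimated or bypassed entirely, as in temporal-difference methods; function approximation enters when the state space is too large for V to be stored as a table.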

In the second half of the tutorial, I will focus on the recent surge of interest in RL on knowledge representation. This new emphasis in RL is motivated by the desire to build more robust AI systems/agents than were hitherto possible. I will describe the resulting research that involves a foundational rethinking of the elemental (PO)MDP-like notions of state, action and reward that have served RL so well. In particular, I will present the ideas and algorithms behind Predictive State Representations or PSRs, TD-nets, options and other notions of flexible actions, and intrinsic rewards. I will conclude this half by arguing that, taken together, these RL ideas on knowledge representation constitute real progress in building knowledge-rich AI agents.

Bio: Satinder Singh is an Associate Professor of Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor. Prior to this he was a principal member of the technical staff in the AI group at AT&T Labs, and earlier still he was an Assistant Professor of Computer Science at the University of Colorado, Boulder. He has published extensively in the field of reinforcement learning and more recently has turned to computational game theory to understand multiagent systems, and to economic mechanism design to understand the role of incentives in designing multiagent systems.

Website:  http://www.eecs.umich.edu/~baveja/NIPS05RLTutorial

 

***********************


Frank Tong
Department of Psychology, Vanderbilt University

Tutorial:
  Reading Brains:  fMRI Studies of Human Vision

Abstract:
  Functional MRI provides new ways to investigate how visual information is processed and represented in the human brain.  Here, we will describe new methods to probe the neural representation of faces, complex objects, and fundamental visual features across the human visual pathway.  Recently developed methods, such as fMRI adaptation and pattern analysis of ensemble activity, now allow researchers to study neural representations at spatial scales that greatly exceed the resolution of fMRI itself.  The first part of the tutorial will describe studies of domain specificity in the human brain, focusing on the functional properties of the fusiform face area (FFA), parahippocampal place area (PPA), and extrastriate body area (EBA). The functional selectivity of these brain areas, their origins and their development, and their role in conscious visual recognition will also be discussed.  The second part of the tutorial will describe pattern analysis methods to measure the ensemble feature selectivity and ensemble object selectivity across the human visual pathway, from primary visual cortex to object areas beyond retinotopic cortex.  The ability to decode a person's conscious perceptual state from cortical activity patterns will also be discussed.  Central themes throughout this tutorial include the relationship between visual selectivity and functional specialization, the information content of cortical signals across different visual areas, and the role of these areas in visual recognition and conscious perception.

Bio:
  Frank Tong is an Assistant Professor of Psychology at Vanderbilt University.  He received his Ph.D. from Harvard University in 1999.  He conducted postdoctoral research on the neural basis of binocular rivalry and visual awareness as a McDonnell-Pew fellow at UCLA from 1999 to 2000, before joining the faculty at Princeton University as Robert K. Root Assistant Professor of Psychology.  He joined the faculty at Vanderbilt University in 2004, where he continues to investigate the neural bases of visual perception, object recognition, attention, and awareness.  His research is supported by the National Institutes of Health.  His research contributions include characterizing the role of primary visual cortex in binocular rivalry and conscious perception, and developing new methods for human neural decoding of orientation perception and subjective visual states.

Website:
  http://www.psy.vanderbilt.edu/faculty/tongf/