Neural Information Processing Systems: Natural and Synthetic (NIPS*2002)
Invited Talks

This page contains abstracts and speaker biographies for the NIPS*2002 invited talks. The invited talks will be interspersed throughout the conference.

Ants at Work
Deborah M. Gordon, Stanford University

Monday
Dinner keynote

An ant colony performs a variety of tasks such as foraging and nest construction. Task allocation is the process that adjusts the numbers of ants performing each task, in a way appropriate to current conditions and colony needs. Task allocation operates without central control. Individuals use local information as cues in task decisions. In harvester ants, an ant uses the rate of brief contact with other ants as a cue. This creates a network of interactions which determines the dynamics of task allocation.
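The contact-rate mechanism described above can be sketched as a toy simulation. All parameters here are hypothetical choices of mine, not Gordon's measured values; the point is only that a threshold-free, purely local recruitment rule lets the number of foragers track food availability with no central controller:

```python
# Toy deterministic sketch (parameters are hypothetical, not Gordon's
# measurements): inactive ants are recruited in proportion to how often
# they contact successful returning foragers; foragers retire at a fixed
# rate. The forager count settles at a level set by food availability.

N = 200        # colony size (hypothetical)
QUIT = 0.1     # fraction of foragers retiring per step (hypothetical)
GAIN = 0.001   # recruitment per contact (hypothetical rate constant)

def step(foragers, food):
    # contacts experienced by nest ants scale with successful returning
    # foragers, so recruitment ~ (inactive ants) x food x foragers
    recruits = GAIN * (N - foragers) * food * foragers
    quits = QUIT * foragers
    return max(0.0, min(float(N), foragers + recruits - quits))

def equilibrium(food, foragers=20.0, steps=500):
    for _ in range(steps):
        foragers = step(foragers, food)
    return foragers

f_rich = equilibrium(food=1.0)   # plentiful food: foraging ramps up
f_poor = equilibrium(food=0.4)   # scarce food: foraging shuts down
```

With these numbers, rich conditions settle near 100 active foragers while poor conditions drive foraging to zero, even though every ant uses only its own local contact rate.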

Deborah M. Gordon is an Associate Professor in the Department of Biological Sciences at Stanford University. Her research is on the social organization and ecology of ant colonies. She has conducted a long-term study of a population of harvester ant colonies in the southwest US desert, which is the subject of her book, Ants at Work. Her laboratory also studies the invasive Argentine ant in northern California.
Neural Basis of Visual Pattern Discrimination
David J. Heeger, New York University

Tuesday
8:30-9:20am

Recent advances in magnetic resonance imaging have made it possible for us to measure neuronal activity in the human brain while the subject is awake and performing any of a variety of tasks. Armed with this new tool, called functional magnetic resonance imaging (fMRI), we are in the midst of a revolution in neuroscience. In vision research, in particular, never before have we had the opportunity to link "what you see" with "what your brain is doing". I will review the anatomy and physiology of the visual pathways in the human brain, and describe a series of experiments and computational models pertaining to visual pattern discrimination.

David J. Heeger is a Professor of Psychology and Neural Science at New York University. He received his Ph.D. in computer science from the University of Pennsylvania. He was a postdoctoral fellow at MIT, a research scientist at the NASA-Ames Research Center, and an Associate Professor at Stanford University before coming to NYU. His research spans an interdisciplinary cross-section of engineering, psychology, and neuroscience, the current focus of which is to use functional magnetic resonance imaging (fMRI) to quantitatively investigate the relationship between brain and behavior. He was awarded the David Marr Prize in computer vision in 1987, an Alfred P. Sloan Research Fellowship in neuroscience in 1994, and the Troland Award in psychology from the National Academy of Sciences in 2002.
The Emergence of Visual Categories - A Computational Perspective
Pietro Perona, California Institute of Technology

Wednesday
8:30-9:20am

When we are born we do not know about sailing boats, frogs, cell-phones and wheelbarrows. By the time we reach school age we can easily recognize these categories of objects and many more using our visual system; by some estimates, we learn around 10 new categories per day with minimal supervision during the first few years of our lives. How can this happen? I will outline a computational approach to the problem of representing the visual properties of object categories, and of learning such models without supervision from cluttered images. Both static images of objects and dynamic displays such as the ones generated by human activity are handled by the theory. Its properties will be exemplified with experiments on a variety of categories.

Pietro Perona's research focuses on visual psychophysics, modeling of biological vision, and computational vision. He has worked on image segmentation, texture, motion, 3D shape, object recognition, and biological motion. Perona is Professor of Electrical Engineering and of Computation and Neural Systems at the California Institute of Technology. He is the Director of the NSF Engineering Research Center on Neuromorphic Systems Engineering.
Decisions, Uncertainty and the Brain: Neuroeconomics
Paul Glimcher, New York University

Wednesday
4:00-4:50pm

Recent studies of neuronal activity in awake-behaving primates have begun to reveal the computational architecture for primate decision making. Perhaps unsurprisingly, these experimental studies indicate that neurons of the parietal cortex explicitly encode classical decision variables like Bayesian prior probability and expected utility. The most recent of these studies have even employed game theoretic methodologies to examine neuronal computation during volitional decision making. Glimcher's presentation will review many of these findings and will suggest that a mathematical framework rooted in modern economic theory will provide the critical computational tool required for understanding the neural basis of human and animal decision making.
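The two "decision variables" named in the abstract can be made concrete with a minimal worked example. The numbers, state names, and payoffs below are hypothetical illustrations of mine, not from any of the experiments discussed:

```python
# Minimal illustration (hypothetical numbers): Bayesian prior probability
# and expected utility, the two classical decision variables the abstract
# says parietal neurons encode. Expected utility of an action is the
# prior-weighted payoff; a utility-maximising decider picks the largest.

prior = {"target_left": 0.7, "target_right": 0.3}   # Bayesian prior over states
payoff = {                                          # reward for (action, state)
    "look_left":  {"target_left": 1.0, "target_right": 0.0},
    "look_right": {"target_left": 0.0, "target_right": 1.0},
}

def expected_utility(action):
    return sum(prior[state] * payoff[action][state] for state in prior)

choice = max(payoff, key=expected_utility)
print(choice, expected_utility(choice))   # look_left 0.7
```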

Paul Glimcher earned his Ph.D. in Neuroscience from the University of Pennsylvania in 1989. From 1990-1993 he was a research associate with David Sparks, also at the University of Pennsylvania, studying the neurobiological mechanisms that control orienting eye movements. Since 1994 he has been at New York University's Center for Neural Science, where he is now an Associate Professor. He has been named a Fellow of the Whitehall Foundation, the McKnight Foundation, and the Klingenstein Foundation and has been an Investigator of the National Eye Institute for the past eight years. His forthcoming book, Decisions, Uncertainty, and the Brain, will be released by MIT Press in January of 2003.
Information in Sensor Networks
Hugh Durrant-Whyte, Australian Centre for Field Robotics, The University of Sydney

Thursday
8:30-9:20am

Building integrated networks of sensors and actuators continues to be a holy grail of systems engineers in a broad range of application domains. Network-centric architectures offer potential benefits of modularity, robustness and scalability in complex "systems of systems". This talk will describe a theory of network centric systems based on probabilistic and information-theoretic principles which has been developed and demonstrated over the past decade.

The talk will first describe the development of the decentralised information filter, a state-estimation algorithm which can be decentralised onto a sensor network of arbitrary size. This allows a decentralised network to engage in problems of tracking or parameter estimation. The information filter naturally leads to the use of mutual information gain to decide what and how to communicate information between agents in the network. Mutual information gain also provides a method of dealing with dynamic networks, intermittent communication and bandwidth constraints. Finally, the abstraction of agent function in an information form provides a quantitative means of controlling sensing operations. This can be exploited locally to provide an "information seeking" behaviour for sensors, and globally to allow cuing and hand-off behaviours to be developed between cooperating sensors.

The talk will describe a series of projects that have been undertaken to demonstrate this network systems theory. These include the development of a fully modular navigation and control system for a mobile robot, and a current programme to fly multiple unmanned air vehicles (UAVs), housing multiple modular sensor payloads, in decentralised form.
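The property that makes the information filter easy to decentralise can be sketched in a few lines. This is a minimal illustration under assumptions of mine (a static linear-Gaussian state observed directly by two sensors), not Durrant-Whyte's implementation: in the information (canonical) form of the Kalman filter, independent measurement contributions simply add, so each node need only broadcast its local information increments.

```python
import numpy as np

# Hedged sketch (assumed setup, not the speaker's code): a Gaussian
# estimate is stored as an information matrix Y = P^-1 and information
# vector y = P^-1 x. Independent observations ADD in this form, which is
# what lets each node in a decentralised network just transmit (dy, dY).

def local_update(H, R, z):
    """Information increments from measurement z = H x + noise(R)."""
    Rinv = np.linalg.inv(R)
    dY = H.T @ Rinv @ H      # information gained about the state
    dy = H.T @ Rinv @ z      # information-vector contribution
    return dy, dY

x_true = np.array([1.0, -2.0])          # hypothetical 2-D state
H = np.eye(2)                            # both sensors observe x directly

y = np.zeros(2)                          # near-uninformative prior
Y = 1e-6 * np.eye(2)

# Fusion across two sensors with different (assumed) noise covariances
for R in (0.5 * np.eye(2), 2.0 * np.eye(2)):
    z = x_true                           # noiseless here, for illustration
    dy, dY = local_update(H, R, z)
    y, Y = y + dy, Y + dY                # fusion = addition

x_hat = np.linalg.solve(Y, y)            # recover the state estimate
```

Because fusion is just addition, the order in which nodes exchange increments does not matter, which is the key to scaling the filter to a network of arbitrary size.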

Hugh Durrant-Whyte received the B.Sc. (Eng.) degree (1st class honours) in Mechanical and Nuclear Engineering from the University of London, U.K., in 1983, and the M.S.E. and Ph.D. degrees, both in Systems Engineering, from the University of Pennsylvania, U.S.A., in 1985 and 1986, respectively. From 1987 to 1995, he was a Senior Lecturer in Engineering Science at the University of Oxford, U.K., and a Fellow of Oriel College, Oxford. Since July 1995 he has been Professor of Mechatronic Engineering at the Department of Mechanical and Mechatronic Engineering, the University of Sydney, Australia, where he leads the Australian Centre for Field Robotics, a Commonwealth Key Centre of Teaching and Research. His research work focuses on automation in cargo handling, surface and underground mining, defence, unmanned flight vehicles and autonomous sub-sea vehicles.
Statistical Data Mining
Andrew W. Moore, Carnegie Mellon University

Thursday
11:10am-12:00pm


How can we exploit massive data without resorting to statistically dubious computational compromises? How can we make routine statistical analysis sufficiently autonomous that it can be used safely by non-statisticians who have to make very important decisions?

We believe that the importance of these questions for applied statistics is increasing rapidly. In this talk we will fall well short of answering them adequately, but we will show a number of examples where steps in the right direction are due to new ways of geometrically preprocessing the data and subsequently asking it questions.

The examples will include biological weapons surveillance, high throughput drug screening, cosmological matter distribution, sky survey anomaly detection, self-tuning engines and accomplice detection. We will then discuss the unifying algorithms which allowed these systems to be deployed rapidly and with relatively large autonomy. These involve a blend of geometric data structures such as Omohundro's Ball Trees, Callahan's Well-Separated Pair Decomposition, and Greengard's Multipole Method in conjunction with new search algorithms based on "racing" samples of data, All-Dimensions Search over aggregates of records and a kind of Higher-Order Divide-and-Conquer over datasets and query sets.
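The pruning idea behind the first of those structures, the ball tree, can be sketched briefly. This is a deliberately simplified illustration of the general technique, not Moore's code: each node stores a centre and a radius covering its points, and an entire subtree is skipped whenever its ball cannot possibly contain a point closer than the best found so far.

```python
import numpy as np

# Simplified ball-tree sketch (illustrative, not a production structure):
# prune a subtree when dist(query, centre) - radius >= best-so-far,
# since no point inside the ball can then beat the current best.

class Ball:
    def __init__(self, pts):
        self.pts = pts
        self.centre = pts.mean(axis=0)
        self.radius = np.max(np.linalg.norm(pts - self.centre, axis=1))
        if len(pts) <= 8:                 # small leaf: search exhaustively
            self.children = []
        else:                             # crude split on widest dimension
            d = np.argmax(np.ptp(pts, axis=0))
            order = np.argsort(pts[:, d])
            mid = len(pts) // 2
            self.children = [Ball(pts[order[:mid]]), Ball(pts[order[mid:]])]

def nearest(node, q, best=np.inf):
    """Distance from q to its nearest neighbour under node."""
    if np.linalg.norm(q - node.centre) - node.radius >= best:
        return best                       # prune: ball cannot beat best
    if not node.children:
        return min(best, np.min(np.linalg.norm(node.pts - q, axis=1)))
    # visit the closer child first so pruning fires more often
    for child in sorted(node.children,
                        key=lambda c: np.linalg.norm(q - c.centre)):
        best = nearest(child, q, best)
    return best

rng = np.random.default_rng(0)
pts = rng.standard_normal((1000, 3))
q = np.zeros(3)
d_tree = nearest(Ball(pts), q)                       # tree search
d_brute = np.min(np.linalg.norm(pts - q, axis=1))    # agrees with brute force
```

The same bound-then-prune pattern generalises from nearest-neighbour queries to the aggregate "All-Dimensions" searches mentioned above, which is what makes these geometric structures reusable across such different applications.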


Andrew Moore is the A. Nico Habermann Associate Professor of Robotics and Computer Science at the School of Computer Science, Carnegie Mellon University. His main research interests are reinforcement learning and data mining, especially data structures and algorithms that allow them to scale to large domains. The Auton Lab, co-directed by Andrew Moore and Jeff Schneider, works with astrophysicists, biologists, marketing groups, bioinformaticists, manufacturers and chemical engineers. He is funded partly from industry, and also thanks to research grants from the National Science Foundation, NASA, and more recently from the Defense Advanced Research Projects Agency to work on data mining for biosurveillance and for helping intelligence analysts.

Andrew began his career writing video-games for an obscure British personal computer. He rapidly became a thousandaire and retired to academia, where he received a PhD from the University of Cambridge in 1991. He researched robot learning as a Post-doc working with Chris Atkeson, and then moved to a faculty position at Carnegie Mellon.


About this Webpage

For issues regarding page design and content, contact Alexander Gray. For issues regarding forms, scripts and server operation, contact Guy Lebanon.