Workshop
Bounded-rational analyses of human cognition: Bayesian models, approximate inference, and the brain
Noah Goodman · Edward Vul · Tom Griffiths · Josh Tenenbaum

Sat Dec 12 07:30 AM -- 06:30 PM (PST) @ Westin: Alpine BC
Event URL: http://www.mit.edu/~ndg/NIPS09Workshop.html

Bayesian, or "rational", accounts of human cognition have enjoyed much success in recent years: human behavior is well described by probabilistic inference in low-level perceptual and motor tasks as well as in high-level cognitive tasks like category and concept learning, language, and theory of mind. However, these models are typically defined at the abstract "computational" level: they successfully describe the computational task solved by human cognition without committing to the algorithm that carries it out. Bayesian models usually assume unbounded cognitive resources are available for computation, yet traditional cognitive psychology has emphasized the severe limitations of human cognition. A key challenge for the Bayesian approach to cognition is therefore to describe the algorithms that carry out approximate probabilistic inference using the bounded computational resources of the human brain.
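To make the computational-level point concrete, here is a minimal Python sketch of exact Bayesian inference over a toy hypothesis space; the categories, prior, and likelihoods are invented for illustration. A computational-level model specifies only this posterior, not how the brain computes it.

import numpy as np

# Exact Bayesian inference over a toy two-hypothesis space. The
# categories, prior, and likelihood values below are made up for
# illustration; a computational-level model specifies this posterior
# without saying how it is computed.
hypotheses = ["category A", "category B"]
prior = np.array([0.5, 0.5])
likelihood = np.array([0.8, 0.3])   # P(observed feature | hypothesis)

posterior = prior * likelihood
posterior /= posterior.sum()
for h, p in zip(hypotheses, posterior):
    print(f"P({h} | data) = {p:.3f}")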

Inspired by the success of Monte Carlo methods in machine learning, several different groups have suggested that humans make inferences not by manipulating whole distributions, but by drawing a small number of samples from the appropriate posterior distribution. Monte Carlo algorithms are attractive as algorithmic models of cognition both because they have been used to do inference in a wide variety of structured probabilistic models, scaling to complex situations while mitigating the curse of dimensionality, and because they use resources efficiently and degrade gracefully when time does not permit many samples to be generated. Indeed, given parsimonious assumptions about the cost of obtaining a sample for a bounded agent, it is often best to make decisions using just one sample.
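As a rough illustration of this last point, the following Python sketch simulates a binary decision made by majority vote over k posterior samples and scores it against a hypothetical per-sample time cost; the cost model (reward rate = accuracy / (1 + c*k)), the decision rule, and the parameter values are assumptions made for this example, not details from the workshop.

import numpy as np

rng = np.random.default_rng(0)

def decision_accuracy(p, k, n_trials=100_000):
    # Majority vote over k Bernoulli(p) posterior samples; ties
    # (possible when k is even) are broken at random.
    samples = rng.random((n_trials, k)) < p
    votes = samples.sum(axis=1)
    ties = votes * 2 == k
    correct = np.where(ties, rng.random(n_trials) < 0.5, votes * 2 > k)
    return correct.mean()

# Hypothetical cost model: each sample takes c time units on top of a
# fixed unit of decision overhead, so reward rate = accuracy / (1 + c * k).
p, c = 0.7, 1.0
for k in (1, 3, 10, 100):
    acc = decision_accuracy(p, k)
    print(f"k={k:>3}: accuracy={acc:.3f}  reward rate={acc / (1 + c * k):.3f}")

Under the costs assumed here, one sample yields the highest reward rate even though more samples give more accurate decisions, which is the tradeoff behind the one-sample claim above.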

The claim that human cognition works by sampling identifies the broad class of Monte Carlo algorithms as candidate cognitive process models. Recent evidence from human behavior supports this coarse description of human inference: people seem to operate with a limited set of samples at a time. Narrowing the class of algorithms further yields additional predictions, because the samples drawn by these algorithms are imperfect (not exact samples from the posterior distribution). That is, while most Monte Carlo algorithms yield unbiased estimators given unlimited resources, they all have characteristic biases and dynamics in practice, and it is these biases and dynamics that result in process-level predictions about human cognition. For instance, it has been argued that the characteristic order effects exhibited by sequential Monte Carlo algorithms (particle filters) when run with few particles can explain the primacy and recency effects observed in human category learning, as well as the "garden path" phenomena of human sentence processing. Similarly, others have argued that the temporal correlation of samples obtained from Markov chain Monte Carlo (MCMC) sampling can account for bistable percepts in visual processing.
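As a sketch of the MCMC account mentioned above, the following Python example runs a random-walk Metropolis sampler on a two-mode target standing in for the posterior over two interpretations of an ambiguous stimulus: the chain dwells in one mode for long stretches before switching, which is the temporal correlation analogized to bistable percepts. The target density, step size, and mode locations are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

def log_bimodal(x):
    # Log-density (up to a constant) of an equal mixture of two
    # unit-variance Gaussians at -3 and +3, standing in for the posterior
    # over two interpretations of an ambiguous stimulus.
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

def metropolis(n_steps=5000, step=1.0):
    # Random-walk Metropolis: with modest steps the chain stays in one
    # mode for long stretches before switching, so successive samples
    # are strongly correlated in time.
    x, chain = 0.0, []
    for _ in range(n_steps):
        proposal = x + step * rng.normal()
        if np.log(rng.random()) < log_bimodal(proposal) - log_bimodal(x):
            x = proposal
        chain.append(x)
    return np.array(chain)

chain = metropolis()
mode = (chain > 0).astype(int)   # which "percept" dominates at each step
switches = int(np.count_nonzero(np.diff(mode)))
print(f"time in positive mode: {mode.mean():.2f}, mode switches: {switches}")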

Ultimately the processes of human cognition must be implemented in the brain. Relatively little work has examined how probabilistic inference may be carried out by neural mechanisms, and even less of this work has been based on Monte Carlo algorithms. Several different neural implementations of probabilistic inference, both approximate and exact, have been proposed, but the relationships among these implementations, and their connections to algorithmic and behavioral constraints, remain to be understood. Accordingly, this workshop will foster discussion of neural implementations in light of work on bounded-rational cognitive processes.

The goal of this workshop is to explore the connections between Bayesian models of cognition, human cognitive processes, modern inference algorithms, and neural information processing. We believe that this will be an exciting opportunity to make progress on a set of interlocking questions: Can we derive precise predictions about the dynamics of human cognition from state-of-the-art inference algorithms? Can machine learning be improved by understanding the efficiency tradeoffs made by human cognition? Can descriptions of neural behavior be constrained by theories of human inference processes?

Author Information

Noah Goodman (Massachusetts Institute of Technology)
Edward Vul (Massachusetts Institute of Technology)
Tom Griffiths (Princeton)
Josh Tenenbaum (MIT)

Josh Tenenbaum is an Associate Professor of Computational Cognitive Science at MIT in the Department of Brain and Cognitive Sciences and the Computer Science and Artificial Intelligence Laboratory (CSAIL). He received his PhD from MIT in 1999, and was an Assistant Professor at Stanford University from 1999 to 2002. He studies learning and inference in humans and machines, with the twin goals of understanding human intelligence in computational terms and bringing computers closer to human capacities. He focuses on problems of inductive generalization from limited data -- learning concepts and word meanings, inferring causal relations or goals -- and learning abstract knowledge that supports these inductive leaps in the form of probabilistic generative models or 'intuitive theories'. He has also developed several novel machine learning methods inspired by human learning and perception, most notably Isomap, an approach to unsupervised learning of nonlinear manifolds in high-dimensional data. He has been Associate Editor for the journal Cognitive Science, has been active on program committees for the CogSci and NIPS conferences, and has co-organized a number of workshops, tutorials and summer schools in human and machine learning. Several of his papers have received outstanding paper awards or best student paper awards at the IEEE Computer Vision and Pattern Recognition (CVPR), NIPS, and Cognitive Science conferences. He is the recipient of the New Investigator Award from the Society for Mathematical Psychology (2005), the Early Investigator Award from the Society of Experimental Psychologists (2007), and the Distinguished Scientific Award for Early Career Contribution to Psychology (in the area of cognition and human learning) from the American Psychological Association (2008).
