Tutorial
Bayesian Models of Human Learning and Inference
Josh Tenenbaum

Mon Dec 04 01:02 PM -- 03:00 PM (PST) @ Regency E
Event URL: http://web.mit.edu/cocosci/Talks/nips06-tutorial.ppt

Bayesian methods have revolutionized major areas of artificial intelligence, machine learning, natural language processing and computer vision. Recently Bayesian approaches have also begun to take hold in cognitive science, as a principled framework for explaining how humans might learn, reason, perceive and communicate about their world. This tutorial will sketch some of the challenges and prospects for Bayesian models in cognitive science, and also draw some lessons for bringing probabilistic approaches to artificial intelligence closer to human-level abilities.

The focus will be on learning and reasoning tasks where people routinely make successful generalizations from very sparse evidence. These tasks include word learning and semantic interpretation, inference about unobserved properties of objects and relations between objects, reasoning about the goals of other agents, and causal learning and inference. These inferences can be modeled as Bayesian computations operating over constrained representations of world structure – what cognitive scientists have called “intuitive theories” or “schemas”. For each task, we will consider how the appropriate knowledge representations are structured, how these representations guide Bayesian learning and reasoning, and how these representations could themselves be learned via Bayesian methods. Models will be evaluated both in terms of how well they capture quantitative or qualitative patterns of human behavior, and their ability to solve analogous real-world problems of learning and inference. The models we discuss will draw on – and hopefully, offer new insights for – several directions in contemporary machine learning, including semi-supervised learning, modeling relational data, structure learning in graphical models, hierarchical Bayesian modeling, and Bayesian nonparametrics.
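The kind of Bayesian computation described above can be illustrated with a toy version of concept learning from sparse positive examples (in the style of Tenenbaum's "number game"). The hypothesis space, priors, and example data below are purely illustrative, not taken from the tutorial itself: a learner who assumes examples are sampled uniformly from the true concept (the "size principle") will sharply favor the smallest consistent hypothesis after only a few examples.

```python
# Toy Bayesian concept learning from sparse positive examples.
# Hypotheses are candidate concepts (sets of numbers); the likelihood
# (1/|h|)^n embodies the "size principle" under strong sampling:
# smaller hypotheses that still contain all the examples win.

def bayes_concept(hypotheses, data, domain):
    """Return posterior over hypotheses and generalization probabilities."""
    n = len(data)
    post = {}
    for name, h in hypotheses.items():
        # Likelihood is zero unless every example falls inside the hypothesis.
        post[name] = (1.0 / len(h)) ** n if all(x in h for x in data) else 0.0
    z = sum(post.values())
    post = {name: p / z for name, p in post.items()}
    # p(y belongs to the concept | data) = sum of posterior mass of
    # every hypothesis that contains y (Bayesian model averaging).
    gen = {y: sum(p for name, p in post.items() if y in hypotheses[name])
           for y in domain}
    return post, gen

# Illustrative hypothesis space over the numbers 1..100 (uniform prior).
hypotheses = {
    "even numbers": frozenset(range(2, 101, 2)),
    "powers of 2": frozenset(2 ** k for k in range(1, 7)),
    "multiples of 4": frozenset(range(4, 101, 4)),
}

post, gen = bayes_concept(hypotheses, data=[2, 8, 16], domain=[32, 6, 7])
```

After seeing just {2, 8, 16}, the posterior concentrates on "powers of 2" (the smallest consistent set), so the model generalizes strongly to 32 but only weakly to other even numbers such as 6, a sharpness from sparse data that mirrors human judgments.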

Author Information

Josh Tenenbaum (MIT)

Josh Tenenbaum is an Associate Professor of Computational Cognitive Science at MIT in the Department of Brain and Cognitive Sciences and the Computer Science and Artificial Intelligence Laboratory (CSAIL). He received his PhD from MIT in 1999, and was an Assistant Professor at Stanford University from 1999 to 2002. He studies learning and inference in humans and machines, with the twin goals of understanding human intelligence in computational terms and bringing computers closer to human capacities. He focuses on problems of inductive generalization from limited data -- learning concepts and word meanings, inferring causal relations or goals -- and learning abstract knowledge that supports these inductive leaps in the form of probabilistic generative models or 'intuitive theories'. He has also developed several novel machine learning methods inspired by human learning and perception, most notably Isomap, an approach to unsupervised learning of nonlinear manifolds in high-dimensional data. He has been Associate Editor for the journal Cognitive Science, has been active on program committees for the CogSci and NIPS conferences, and has co-organized a number of workshops, tutorials and summer schools in human and machine learning. Several of his papers have received outstanding paper awards or best student paper awards at the IEEE Computer Vision and Pattern Recognition (CVPR), NIPS, and Cognitive Science conferences. He is the recipient of the New Investigator Award from the Society for Mathematical Psychology (2005), the Early Investigator Award from the Society of Experimental Psychologists (2007), and the Distinguished Scientific Award for Early Career Contribution to Psychology (in the area of cognition and human learning) from the American Psychological Association (2008).
