Microsoft Research New England
Agnostic Learning: Algorithms and Theory
1:00 – 3:00pm Monday, December 08, 2008
Agnostic Learning (Kearns, Schapire and Sellie '92; Haussler '90) is a branch of Computational Learning Theory in which few or no assumptions are made about the true function being learned. The results are powerful, noise-tolerant binary-classification algorithms. Like Valiant's seminal PAC model, the algorithms are required to be computationally efficient. Unlike the PAC model, agnostic learning assumes nothing about the "true" labeling function, and hence it can also be viewed as learning with arbitrary or even adversarial noise. Recent progress has shown that several classical algorithms can be improved and made agnostic. Moreover, the agnostic perspective sheds light on several key Machine Learning notions, such as Fourier learning, SVMs, Active Learning, Decision Tree Learning, and Boosting. Hence, this tutorial on Agnostic Learning also presents an opportunity to overview and reconsider many beautiful notions from Computational Learning Theory.
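For readers unfamiliar with the model, the standard agnostic guarantee can be stated roughly as follows (this is the usual formulation in the literature, not taken from the tutorial itself): examples and labels are drawn from an arbitrary joint distribution D, and the learner must efficiently output a hypothesis h that competes with the best predictor in some fixed comparison class C:

```latex
% Agnostic learning guarantee (standard formulation):
% with high probability, the learner outputs h satisfying
\mathrm{err}_D(h) \;\le\; \min_{c \in \mathcal{C}} \mathrm{err}_D(c) \;+\; \varepsilon,
\qquad \text{where } \mathrm{err}_D(f) = \Pr_{(x,y)\sim D}\bigl[f(x) \neq y\bigr].
```

Because no assumption is placed on D, the labels may disagree with every function in C on an arbitrary fraction of examples, which is why the model captures adversarial noise.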
Adam Tauman Kalai received his BA (1996) from Harvard, and MA (1998) and PhD (2001) under the supervision of Avrim Blum from CMU. After an NSF postdoctoral fellowship at M.I.T. with Santosh Vempala, he served as an assistant professor at the Toyota Technological Institute at Chicago and then at Georgia Tech. He is now a Senior Research Scientist at Microsoft Research New England. His honors include an NSF CAREER award and an Alfred P. Sloan fellowship. His research focuses on computational learning theory, game theory, algorithms, and online optimization.