Tutorial
Statistical Learning Theory: a Hitchhiker's Guide
John Shawe-Taylor · Omar Rivasplata

Mon Dec 03 11:30 AM -- 01:30 PM (PST) @ Room 220 E

The tutorial will showcase what statistical learning theory aims to assess about learning systems and hence deliver for them. We will highlight how algorithm design can piggyback on its results to improve the performance of learning algorithms, as well as to understand their limitations. The tutorial is aimed at those wishing to gain an understanding of the value and role of statistical learning theory, so that they can hitch a ride on its results.

Author Information

John Shawe-Taylor (UCL)

John Shawe-Taylor has contributed to fields ranging from graph theory through cryptography to statistical learning theory and its applications. His main contributions, however, have been in the analysis and subsequent algorithmic definition of principled machine learning algorithms founded in statistical learning theory. This work has helped to drive a fundamental rebirth of the field of machine learning with the introduction of kernel methods and support vector machines, and has carried these approaches into novel domains, including work in computer vision, document classification, and applications in biology and medicine focussed on brain scan, immunity and proteome analysis. He has published over 300 papers and two books that have together attracted over 60,000 citations. He has also been instrumental in assembling a series of influential European Networks of Excellence. The scientific coordination of these projects has influenced a generation of researchers and promoted the widespread uptake of machine learning in both science and industry that we are currently witnessing.

Omar Rivasplata (UCL / DeepMind)

Omar Rivasplata researches the connection between the stability of learning algorithms and their future performance, as guaranteed by PAC generalization bounds and PAC-Bayes inequalities. He is interested in matching sound theory to practice, and aims to contribute to understanding automated learning systems in a theoretically well-founded fashion. His previous research includes game theory, reversibility of Markov diffusions, and invertibility of sparse random matrices. Omar is currently a graduate student in Computer Science at UCL and a research intern at DeepMind.
