Progress in Self-Certified Neural Networks
Maria Perez-Ortiz · Omar Rivasplata · Emilio Parrado-Hernández · Benjamin Guedj · John Shawe-Taylor
Event URL: https://openreview.net/forum?id=gHht0LOPBDy

A learning method is self-certified if it uses all available data to simultaneously learn a predictor and certify its quality with a statistical certificate that is valid on unseen data. Recent work has shown that neural network models trained by optimising PAC-Bayes bounds lead not only to accurate predictors, but also to tight risk certificates, bearing promise towards self-certified learning. In this context, learning and certification strategies based on PAC-Bayes bounds are especially attractive because they can leverage all available data to learn a posterior and simultaneously certify its risk. In this paper, we assess the progress towards self-certification in neural networks learnt by PAC-Bayes-inspired objectives. We empirically compare (on 4 classification datasets) classical test set bounds for deterministic predictors and a PAC-Bayes bound for randomised self-certified predictors. We show that in data-starvation regimes, holding out data for the test set bounds adversely affects generalisation performance, while learning and certification strategies based on PAC-Bayes bounds do not suffer from this drawback. We find that probabilistic neural networks learnt by PAC-Bayes-inspired objectives lead to certificates that can be surprisingly competitive with commonly used test set bounds.
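For a concrete feel for the two kinds of certificates being compared, the minimal Python sketch below (not the authors' code; function names and toy numbers are illustrative) computes a classical kl-inversion test set bound for a deterministic predictor on held-out data, and a PAC-Bayes-kl certificate for a randomised predictor using all n training examples. It omits the extra Monte Carlo estimation term needed in practice when the empirical risk of the randomised predictor is itself estimated by sampling weights.

import math

def binary_kl(q, p):
    # KL divergence between Bernoulli(q) and Bernoulli(p).
    eps = 1e-12
    q = min(max(q, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def kl_inverse(q, c, tol=1e-10):
    # Largest p >= q with binary_kl(q, p) <= c, found by bisection.
    lo, hi = q, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binary_kl(q, mid) <= c:
            lo = mid
        else:
            hi = mid
    return lo

def test_set_bound(test_errors, n_test, delta=0.05):
    # Classical kl test set bound: uses only the n_test held-out examples,
    # which therefore cannot be used for training.
    emp_risk = test_errors / n_test
    return kl_inverse(emp_risk, math.log(1.0 / delta) / n_test)

def pac_bayes_kl_bound(emp_risk, kl_posterior_prior, n, delta=0.05):
    # PAC-Bayes-kl certificate: uses all n training examples;
    # kl_posterior_prior is KL(Q || P) between the learnt posterior
    # and the prior over network weights.
    rhs = (kl_posterior_prior + math.log(2.0 * math.sqrt(n) / delta)) / n
    return kl_inverse(emp_risk, rhs)

# Toy illustration (made-up numbers, not results from the paper):
print(test_set_bound(test_errors=120, n_test=10_000))
print(pac_bayes_kl_bound(emp_risk=0.02, kl_posterior_prior=5_000.0, n=60_000))

The kl inversion is what keeps both certificates tight at low empirical risk; a simpler Hoeffding-style version would replace it with emp_risk + sqrt(c / 2).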

Author Information

Maria Perez-Ortiz (University College London)
Omar Rivasplata (IMSS UCL)

My top-level areas of interest are statistical learning theory, machine learning, probability and statistics. These days I am very interested in deep learning and reinforcement learning. I am a Senior Research Fellow at the Department of Statistical Science, University College London. Before my current post I spent a few months at the Department of Mathematics at UCL. Before that I spent a few years at the Department of Computer Science at UCL, where I did research in statistical machine learning, sponsored by DeepMind. In parallel with these studies I was a research scientist intern at DeepMind for three years. Back in the day I studied undergraduate maths (BSc 2000, Pontificia Universidad Católica del Perú) and graduate maths (MSc 2005, PhD 2012, University of Alberta). I've lived in Peru and Canada, and I'm now based in the UK.

Emilio Parrado-Hernández
Benjamin Guedj (Inria & University College London)

Benjamin Guedj has been a tenured research scientist at Inria since 2014, affiliated with the Lille - Nord Europe research centre in France. He is also affiliated with the mathematics department of the University of Lille. Since 2018, he has been a Principal Research Fellow at the Centre for Artificial Intelligence and Department of Computer Science at University College London. He is also a visiting researcher at The Alan Turing Institute. Since 2020, he has been the founder and scientific director of The Inria London Programme, a strategic partnership between Inria and UCL as part of a France-UK scientific initiative. He obtained his Ph.D. in mathematics in 2013 from UPMC (Université Pierre & Marie Curie, France) under the supervision of Gérard Biau and Éric Moulines. Prior to that, he was a research assistant at DTU Compute (Denmark). His main line of research is in statistical machine learning, from both theoretical and algorithmic perspectives. He is primarily interested in the design, analysis and implementation of statistical machine learning methods for high-dimensional problems, mainly using PAC-Bayesian theory.

John Shawe-Taylor (UCL)

John Shawe-Taylor has contributed to fields ranging from graph theory through cryptography to statistical learning theory and its applications. His main contributions, however, have been in the development of the analysis and subsequent algorithmic definition of principled machine learning algorithms founded in statistical learning theory. This work helped to drive a fundamental rebirth of the field of machine learning with the introduction of kernel methods and support vector machines, and mapped these approaches onto novel domains including computer vision, document classification, and applications in biology and medicine focussed on brain scan, immunity and proteome analysis. He has published over 300 papers and two books that have together attracted over 60,000 citations. He has also been instrumental in assembling a series of influential European Networks of Excellence. The scientific coordination of these projects has influenced a generation of researchers and promoted the widespread uptake of machine learning in both science and industry that we are currently witnessing.
