

Poster in Workshop: Bayesian Deep Learning

Progress in Self-Certified Neural Networks

Maria Perez-Ortiz · Omar Rivasplata · Emilio Parrado-Hernández · Benjamin Guedj · John Shawe-Taylor


Abstract:

A learning method is self-certified if it uses all available data to simultaneously learn a predictor and certify its quality with a statistical certificate that is valid on unseen data. Recent work has shown that neural network models trained by optimising PAC-Bayes bounds lead not only to accurate predictors but also to tight risk certificates, bearing promise towards self-certified learning. Learning and certification strategies based on PAC-Bayes bounds are especially attractive in this context because they leverage all available data to learn a posterior and simultaneously certify its risk. In this paper, we assess the progress towards self-certification in neural networks learnt by PAC-Bayes inspired objectives. We empirically compare, on four classification datasets, classical test set bounds for deterministic predictors with a PAC-Bayes bound for randomised self-certified predictors. We show that in data-starvation regimes, holding out data for the test set bounds adversely affects generalisation performance, whereas learning and certification strategies based on PAC-Bayes bounds do not suffer this drawback. We find that probabilistic neural networks learnt by PAC-Bayes inspired objectives lead to certificates that are surprisingly competitive with commonly used test set bounds.
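The two certificate types the abstract compares can be made concrete with a short sketch. The Python snippet below is ours, not the authors' code: it computes a classical kl test-set bound for a deterministic predictor and a PAC-Bayes-kl bound for a randomised predictor, both by numerically inverting the binary KL divergence. The bound forms assumed here are the standard Langford test-set bound and the Maurer/Langford-Seeger PAC-Bayes-kl bound; all function names and the numbers in the usage example are illustrative assumptions.

import numpy as np

def kl_binary(q, p):
    # Binary KL divergence kl(q||p) between Bernoulli(q) and Bernoulli(p),
    # with arguments clamped away from 0 and 1 for numerical stability.
    eps = 1e-12
    q = min(max(q, eps), 1.0 - eps)
    p = min(max(p, eps), 1.0 - eps)
    return q * np.log(q / p) + (1.0 - q) * np.log((1.0 - q) / (1.0 - p))

def kl_inverse(q_hat, budget, tol=1e-9):
    # Largest p >= q_hat with kl(q_hat||p) <= budget, found by bisection
    # (kl(q_hat||p) is increasing in p on [q_hat, 1]).
    lo, hi = q_hat, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if kl_binary(q_hat, mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

def test_set_bound(errors, m, delta=0.05):
    # Classical kl test-set bound: with probability >= 1 - delta over the
    # m held-out examples, the true risk is at most the returned value.
    return kl_inverse(errors / m, np.log(1.0 / delta) / m)

def pac_bayes_kl_bound(emp_risk, kl_qp, n, delta=0.05):
    # PAC-Bayes-kl bound for a randomised predictor Q with prior P:
    # kl(emp_risk || true_risk) <= (KL(Q||P) + ln(2*sqrt(n)/delta)) / n,
    # valid with probability >= 1 - delta over the n training examples.
    budget = (kl_qp + np.log(2.0 * np.sqrt(n) / delta)) / n
    return kl_inverse(emp_risk, budget)

# Illustrative numbers only: 50 errors on a 1,000-example held-out set,
# versus a stochastic net with empirical risk 0.05, KL(Q||P) = 5000, n = 50,000.
print(test_set_bound(50, 1000))                 # ~0.069
print(pac_bayes_kl_bound(0.05, 5000.0, 50000))  # ~0.21

The trade-off described in the abstract is visible in this sketch: the test-set bound spends data, since the m held-out examples cannot be used for training, while the PAC-Bayes certificate is computed on all n training examples and instead pays through the KL(Q||P) complexity term.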
