

Poster

PAC-Bayes Un-Expected Bernstein Inequality

Zakaria Mhammedi · Peter Grünwald · Benjamin Guedj

East Exhibition Hall B, C #234

Keywords: [ Learning Theory ] [ Theory ] [ Algorithms -> Classification ] [ Algorithms -> Uncertainty Estimation ] [ Large Deviations and Asymptotic Analysis ]


Abstract: We present a new PAC-Bayesian generalization bound. Standard bounds contain a $\sqrt{L_n \cdot \mathrm{KL}/n}$ complexity term which dominates unless $L_n$, the empirical error of the learning algorithm's randomized predictions, vanishes. We manage to replace $L_n$ by a term which vanishes in many more situations, essentially whenever the employed learning algorithm is sufficiently stable on the dataset at hand. Our new bound consistently beats state-of-the-art bounds both on a toy example and on UCI datasets (with large enough $n$). Theoretically, unlike existing bounds, our new bound can be expected to converge to 0 faster whenever a Bernstein/Tsybakov condition holds, thus connecting PAC-Bayesian generalization and excess risk bounds; for the latter, it has long been known that faster convergence can be obtained under Bernstein conditions. Our main technical tool is a new concentration inequality which is like Bernstein's but with $X^2$ taken outside its expectation.
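To make the two shapes contrasted in the abstract concrete, here is an illustrative sketch that is not taken from the poster page; the notation $\rho$ (posterior over predictors), $\pi$ (prior), $L(\rho)$ (risk), $\hat{L}_n(\rho) = L_n$ (empirical error), $b$, $c$, and $c_\eta$ is assumed for the purpose of this sketch, and exact constants and conditions are suppressed. A standard PAC-Bayes bound typically has the form

$$ L(\rho) \;\le\; \hat{L}_n(\rho) \;+\; \sqrt{\frac{\hat{L}_n(\rho)\,\bigl(\mathrm{KL}(\rho\|\pi) + \ln\tfrac{1}{\delta}\bigr)}{n}} \;+\; O\!\left(\frac{\mathrm{KL}(\rho\|\pi) + \ln\tfrac{1}{\delta}}{n}\right), $$

so the square-root term only decays at rate $1/\sqrt{n}$ unless the empirical error $\hat{L}_n(\rho)$ vanishes. As a rough schematic of the main tool, a Bernstein-type result can be phrased via an exponential moment bound in which the penalty involves the expected second moment,

$$ \mathbb{E}\Bigl[\exp\bigl(\eta\,(\mathbb{E}[X] - X) - c\,\eta^2\,\mathbb{E}[X^2]\bigr)\Bigr] \;\le\; 1, $$

whereas an "un-expected" version keeps $X^2$ itself inside the exponent,

$$ \mathbb{E}\Bigl[\exp\bigl(\eta\,(\mathbb{E}[X] - X) - c_\eta\,\eta^2\,X^2\bigr)\Bigr] \;\le\; 1, $$

for a random variable $X$ bounded above by $b$, suitable values of $\eta$, and a factor $c_\eta$ depending only on $\eta$ and $b$; the precise statement is in the paper.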
