Poster

PAC-Bayes under potentially heavy tails

Matthew Holland

East Exhibition Hall B + C #227

Keywords: [ Theory ] [ Learning Theory ]


Abstract:

We derive PAC-Bayesian learning guarantees for heavy-tailed losses and obtain a novel optimal Gibbs posterior that enjoys finite-sample excess risk bounds at logarithmic confidence. Our core technique itself uses PAC-Bayesian inequalities to derive a robust risk estimator that is easy to compute by design. In particular, assuming only that the first three moments of the loss distribution are bounded, the learning algorithm derived from this estimator achieves nearly sub-Gaussian statistical error, up to the quality of the prior.
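The abstract does not reproduce the estimator itself, but a Catoni/Giulini-style soft-truncated mean conveys the flavor of an "easy to compute" robust risk estimate under bounded-moment assumptions. The following is a minimal sketch only: the scale choice, the truncation function, and the names (robust_risk, var_bound) are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def robust_risk(losses, delta=0.05, var_bound=1.0):
    """Soft-truncated mean of observed losses (a sketch, not the paper's estimator).

    Uses a Catoni-Giulini-type influence function: near-linear around zero
    and saturating for large inputs, so no single heavy-tailed loss can
    dominate the estimate.
    """
    losses = np.asarray(losses, dtype=float)
    n = losses.size
    # Scale so typical deviations are O(sqrt(var * log(1/delta) / n));
    # the exact constant here is an assumption.
    s = np.sqrt(n * var_bound / np.log(1.0 / delta))
    u = losses / s
    # Smooth truncation: identity-like for |u| <= sqrt(2), constant beyond.
    psi = np.where(np.abs(u) <= np.sqrt(2.0),
                   u - u**3 / 6.0,
                   np.sign(u) * 2.0 * np.sqrt(2.0) / 3.0)
    return s * psi.mean()
```

Because the influence function is bounded, each observation can move the estimate by at most O(s/n); this bounded-influence property is what makes sub-Gaussian-like deviation guarantees possible even when the loss distribution is heavy-tailed.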
