Probabilistic neural networks are typically modeled with independent weight priors, which neither capture correlations between weights nor provide a parsimonious interface for expressing properties in function space. A desirable class of priors would represent weights compactly, capture correlations between weights, facilitate calibrated reasoning about uncertainty, and allow inclusion of prior knowledge about the function space, such as periodicity or dependence on contexts such as inputs. To this end, this paper introduces two innovations: (i) a Gaussian process-based hierarchical model for network weights, built on unit-level priors, that can flexibly encode correlated weight structures, and (ii) input-dependent versions of these weight priors that provide a convenient way to regularize the function space through kernels defined on contextual inputs. We show that these models provide desirable test-time uncertainty estimates on out-of-distribution data, and we demonstrate cases of modeling inductive biases for neural networks with kernels that help both interpolation and extrapolation from training data.
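To make innovation (i) concrete, below is a minimal sketch of a hierarchical GP prior over one layer's weights: each unit gets a latent code, and each weight is indexed by the codes of the two units it connects, so weights sharing a unit become correlated under the prior. The layer sizes, latent dimensionality, and RBF kernel are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(X, Y, lengthscale=1.0):
    # Squared-exponential kernel between rows of X and Y.
    sqdist = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sqdist / lengthscale**2)

# Illustrative layer sizes and latent dimensionality (assumptions).
n_in, n_out, latent_dim = 4, 3, 2

# One latent code per unit; the hierarchy lives on units, not weights.
z_in = rng.normal(size=(n_in, latent_dim))
z_out = rng.normal(size=(n_out, latent_dim))

# Index each weight w_ij by the concatenated codes of its two units.
pair_codes = np.concatenate(
    [np.repeat(z_in, n_out, axis=0), np.tile(z_out, (n_in, 1))], axis=1
)  # shape: (n_in * n_out, 2 * latent_dim)

# GP prior over weights: the kernel on unit-code pairs induces the
# covariance, so weights sharing a unit are correlated a priori.
K = rbf(pair_codes, pair_codes) + 1e-6 * np.eye(len(pair_codes))
W = rng.multivariate_normal(np.zeros(len(pair_codes)), K).reshape(n_in, n_out)
print(W)  # one correlated weight-matrix sample from the prior
```

An input-dependent variant in the spirit of (ii) would additionally feed contextual features into the kernel, e.g. by concatenating them to pair_codes, so that the induced weight correlations, and hence the regularization of the function space, vary with the input.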
Author Information
Theofanis Karaletsos (Facebook)
Thang Bui (Uber AI / University of Sydney)
More from the Same Authors
- 2023 Poster: Sparse Additive Mechanism Shift For Disentangled Representation Learning
  Michael Bereket · Theofanis Karaletsos
- 2022 Poster: Black-box coreset variational inference
  Dionysis Manousakas · Hippolyt Ritter · Theofanis Karaletsos
- 2017 Poster: Streaming Sparse Gaussian Process Approximations
  Thang Bui · Cuong Nguyen · Richard Turner
- 2016 Workshop: Machine Learning for Health
  Uri Shalit · Marzyeh Ghassemi · Jason Fries · Rajesh Ranganath · Theofanis Karaletsos · David Kale · Peter Schulam · Madalina Fiterau
- 2015 Workshop: Machine Learning For Healthcare (MLHC)
  Theofanis Karaletsos · Rajesh Ranganath · Suchi Saria · David Sontag