

Poster in Workshop: Distribution shifts: connecting methods and applications (DistShift)

PCA Subspaces Are Not Always Optimal for Bayesian Learning

Alexandre Bense · Amir Joudaki · Tim G. J. Rudner · Vincent Fortuin


Abstract:

Bayesian neural networks are often sought after for their strong and trustworthy predictive power. However, inference in these models is computationally expensive; its cost can be reduced via dimensionality reduction, where the key goal is to find a subspace in which to perform inference while retaining most of the predictive power. In this work, we present a theoretical comparison of Principal Component Analysis (PCA) and random projection for Bayesian linear regression. We find that PCA is not always the optimal dimensionality reduction method and that random projection can in fact be superior, especially when the data distribution is shifted and the labels have a small norm. We then confirm these results experimentally. This work therefore suggests considering dimensionality reduction by random projection for Bayesian inference when noisy data are expected.
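To make the compared setting concrete, here is a minimal sketch (not the authors' code) of Bayesian linear regression performed in a low-dimensional subspace obtained either from PCA or from a random projection, evaluated on a mean-shifted test set. The data-generating process, prior and noise variances, and all variable names are illustrative assumptions, not details taken from the paper.

```python
# Sketch: Bayesian linear regression in a k-dimensional subspace obtained
# either by PCA or by a random projection. All constants below are assumed
# for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 50, 5          # samples, input dim, subspace dim (assumed)
sigma2, tau2 = 0.5, 1.0       # observation noise and prior variance (assumed)

# Synthetic training data and a distribution-shifted test set (assumed setup).
X_train = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y_train = X_train @ w_true + np.sqrt(sigma2) * rng.normal(size=n)
X_test = rng.normal(loc=0.5, size=(n, d))            # mean-shifted inputs
y_test = X_test @ w_true + np.sqrt(sigma2) * rng.normal(size=n)

def pca_projection(X, k):
    """Top-k right singular vectors of the centered design matrix."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T                                   # shape (d, k)

def random_projection(d, k, rng):
    """Orthonormalized Gaussian random projection."""
    Q, _ = np.linalg.qr(rng.normal(size=(d, k)))
    return Q                                          # shape (d, k)

def predictive_log_lik(P, X_tr, y_tr, X_te, y_te):
    """Conjugate Bayesian linear regression in the subspace spanned by P."""
    Z_tr, Z_te = X_tr @ P, X_te @ P                   # projected features
    # Gaussian posterior over subspace weights: N(mu, Sigma)
    Sigma = np.linalg.inv(Z_tr.T @ Z_tr / sigma2 + np.eye(k) / tau2)
    mu = Sigma @ Z_tr.T @ y_tr / sigma2
    # Posterior predictive mean and variance for each test point
    mean = Z_te @ mu
    var = sigma2 + np.einsum("ij,jk,ik->i", Z_te, Sigma, Z_te)
    return -0.5 * np.mean(np.log(2 * np.pi * var) + (y_te - mean) ** 2 / var)

print("PCA subspace   :",
      predictive_log_lik(pca_projection(X_train, k), X_train, y_train, X_test, y_test))
print("Random subspace:",
      predictive_log_lik(random_projection(d, k, rng), X_train, y_train, X_test, y_test))
```

The sketch only illustrates the kind of comparison described in the abstract; which projection achieves higher predictive log-likelihood depends on the data distribution, the shift, and the label norm, which is the question the paper analyzes theoretically.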
