PCA Subspaces Are Not Always Optimal for Bayesian Learning
Alexandre Bense · Amir Joudaki · Tim G. J. Rudner · Vincent Fortuin
Event URL: https://openreview.net/forum?id=iPYHorHDtPh

Bayesian neural networks are often sought after for their strong and trustworthy predictive power. However, inference in these models is often computationally expensive; its cost can be reduced via dimensionality reduction, where the key goal is to find an appropriate subspace in which to perform inference while retaining significant predictive power. In this work, we propose a theoretical comparative study of Principal Component Analysis (PCA) versus random projection for Bayesian linear regression. We find that PCA is not always the optimal dimensionality reduction method and that random projection can actually be superior, especially in cases where the data distribution is shifted and the labels have a small norm. We then confirm these results experimentally. This work therefore suggests considering dimensionality reduction by random projection for Bayesian inference when noisy data are expected.
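The comparison the abstract describes can be sketched as follows. This is a minimal, hypothetical illustration (not the authors' code or experimental setup): it projects the features onto a k-dimensional subspace, using either the top PCA directions or a Gaussian random projection, and then runs standard conjugate Bayesian linear regression in that subspace. The prior and noise precisions, the synthetic data, and the helper name `bayes_linreg_predict` are all assumptions made for the sake of the example.

```python
# Hedged sketch: Bayesian linear regression in a PCA subspace vs. a
# random-projection subspace. Synthetic data and hyperparameters are
# illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def bayes_linreg_predict(Z_train, y_train, Z_test, alpha=1.0, beta=25.0):
    """Posterior-mean prediction for conjugate Bayesian linear regression
    with weight prior N(0, alpha^-1 I) and noise precision beta."""
    k = Z_train.shape[1]
    S_inv = alpha * np.eye(k) + beta * Z_train.T @ Z_train  # posterior precision
    mean = beta * np.linalg.solve(S_inv, Z_train.T @ y_train)  # posterior mean
    return Z_test @ mean

# Synthetic regression problem
n, d, k = 200, 50, 5
X = rng.normal(size=(n, d))
w = rng.normal(size=d)
y = X @ w + 0.1 * rng.normal(size=n)
X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]

# PCA subspace: top-k right singular vectors of the centered training data
_, _, Vt = np.linalg.svd(X_tr - X_tr.mean(axis=0), full_matrices=False)
P_pca = Vt[:k].T                      # (d, k) projection matrix

# Random projection: Gaussian matrix of matching shape
P_rand = rng.normal(size=(d, k)) / np.sqrt(k)

for name, P in [("PCA", P_pca), ("random", P_rand)]:
    pred = bayes_linreg_predict(X_tr @ P, y_tr, X_te @ P)
    print(f"{name} subspace test MSE: {np.mean((pred - y_te) ** 2):.3f}")
```

Which projection yields lower test error depends on the data; the paper's claim is precisely that under distribution shift and small-norm labels, the random projection can win.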

Author Information

Alexandre Bense (Swiss Federal Institute of Technology)
Amir Joudaki (Swiss Federal Institute of Technology)
Tim G. J. Rudner (University of Oxford)
Vincent Fortuin (ETH Zürich)
