

Poster in Workshop: Machine Learning and the Physical Sciences (NeurIPS 2023)

Understanding Pathologies of Deep Heteroskedastic Regression

Eliot Wong-Toi · Alex Boyd · Vincent Fortuin · Stephan Mandt


Abstract:

Recent studies have reported negative results when using heteroskedastic neural regression models. In particular, for overparameterized models, the mean and variance networks are powerful enough to either fit every single data point, or to learn a constant prediction with an output variance exactly matching every predicted residual, thereby explaining the targets as pure noise. We study these difficulties from the perspective of statistical physics and show that the observed instabilities are not specific to any neural network architecture but are already present in a field theory of an overparameterized conditional Gaussian likelihood model. Under mild assumptions, we derive a nonparametric free energy that can be solved numerically. The resulting solutions show excellent qualitative agreement with empirical model fits on real-world data and, in particular, prove the existence of phase transitions, i.e., abrupt, qualitative differences in the behavior of the regressor as the regularization strengths on the two networks are varied. Our work provides a theoretical explanation for the need to carefully regularize heteroskedastic regression models. Moreover, the insights from our theory suggest a scheme for optimizing this regularization that is quadratically more efficient than the naive approach.
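The setup described above is the standard heteroskedastic Gaussian objective with independent regularization strengths on the mean and variance networks. As a concrete illustration, here is a minimal PyTorch sketch of that setup; the architecture, toy data, and the values of `lam_mu` and `lam_sigma` are illustrative assumptions, not the authors' experimental configuration.

```python
# Minimal sketch (not the authors' code): heteroskedastic regression with a
# Gaussian likelihood, where separate networks predict the conditional mean
# mu(x) and the conditional variance sigma^2(x). The pair of regularization
# strengths (lam_mu, lam_sigma) on the two networks is the quantity whose
# variation produces the phase transitions described in the abstract.
import torch
import torch.nn as nn

def mlp(width=64):
    # Small fully connected network; architecture is an arbitrary choice here.
    return nn.Sequential(nn.Linear(1, width), nn.Tanh(),
                         nn.Linear(width, width), nn.Tanh(),
                         nn.Linear(width, 1))

mean_net, logvar_net = mlp(), mlp()

def nll(x, y):
    # Gaussian negative log-likelihood (up to a constant); predicting
    # log sigma^2 rather than sigma^2 keeps the variance positive and stable.
    mu = mean_net(x)
    logvar = logvar_net(x)
    return 0.5 * (logvar + (y - mu) ** 2 / logvar.exp()).mean()

# Independent weight decay on the two networks spans the 2D regularization
# plane studied in the paper; the specific values below are hypothetical.
lam_mu, lam_sigma = 1e-3, 1e-2
opt = torch.optim.Adam([
    {"params": mean_net.parameters(), "weight_decay": lam_mu},
    {"params": logvar_net.parameters(), "weight_decay": lam_sigma},
], lr=1e-3)

# Toy 1D data, purely for illustration.
x = torch.linspace(-1, 1, 128).unsqueeze(-1)
y = torch.sin(3 * x) + 0.1 * torch.randn_like(x)

for step in range(2000):
    opt.zero_grad()
    loss = nll(x, y)
    loss.backward()
    opt.step()
```

Tuning `(lam_mu, lam_sigma)` by a full grid search costs a number of fits quadratic in the candidate values per axis; this is the naive approach the abstract's proposed scheme improves upon, though the scheme itself is not detailed on this page.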
