

Poster in Workshop: Heavy Tails in ML: Structure, Stability, Dynamics

High-dimensional robust regression under heavy-tailed data: Asymptotics and Universality

Urte Adomaityte · Leonardo Defilippis · Bruno Loureiro · Gabriele Sicuro

Keywords: [ robust regression ] [ Huber loss ] [ heavy-tail contamination ] [ replica trick ] [ M-estimation ] [ high-dimensional statistics ]


Abstract: We investigate the high-dimensional properties of robust regression estimators in the presence of heavy-tailed contamination of both the covariates and response functions. In particular, we provide a sharp asymptotic characterisation of M-estimators trained on a family of elliptical covariate and noise distributions, including cases where second and higher moments do not exist. We show that, despite being consistent, the Huber loss with optimally tuned location parameter $\delta$ is suboptimal in the high-dimensional regime in the presence of heavy-tailed noise, necessitating regularisation for optimal performance. This result also uncovers a transition in $\delta$ as a function of the sample complexity and contamination. Moreover, we derive the decay rates for the excess risk of ridge regression. We show that, while it is optimal and universal for covariate distributions with finite second moment, its decay rate can be considerably faster when the covariates' second moment does not exist. Finally, we show that our formulas readily generalise to a richer family of models and data distributions, such as generalised linear estimation with arbitrary convex regularisation trained on mixture models.
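To make the setting concrete, here is a minimal sketch, not the paper's actual experiments or asymptotic formulas: it fits a regularised Huber M-estimator and ridge regression on synthetic data with heavy-tailed Student-t noise whose second moment does not exist, and compares their parameter-estimation error. scikit-learn's `epsilon` stands in for the location parameter $\delta$, and the dimensions, degrees of freedom, and regularisation strengths are arbitrary illustrative choices.

```python
# Illustrative sketch only: regularised Huber M-estimation vs. ridge
# regression under heavy-tailed noise. All constants are assumptions.
import numpy as np
from sklearn.linear_model import HuberRegressor, Ridge

rng = np.random.default_rng(0)
n, d = 500, 100                    # sample size and dimension (arbitrary)
nu = 1.5                           # Student-t dof < 2: noise second moment does not exist

theta_star = rng.standard_normal(d) / np.sqrt(d)   # ground-truth weights
X = rng.standard_normal((n, d))                    # covariates
noise = rng.standard_t(df=nu, size=n)              # heavy-tailed noise
y = X @ theta_star + noise

# Huber threshold (sklearn's `epsilon` plays the role of delta) plus an
# l2 penalty `alpha`; both would be tuned in practice.
huber = HuberRegressor(epsilon=1.35, alpha=1e-2, max_iter=1000).fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)

# For a linear model, the excess risk reduces to the squared
# parameter-estimation error.
for name, est in [("huber", huber), ("ridge", ridge)]:
    err = np.sum((est.coef_ - theta_star) ** 2)
    print(f"{name:5s} excess risk: {err:.4f}")
```

Varying `nu`, the sample complexity `n / d`, and the penalty `alpha` in this sketch is one way to probe, numerically, the regimes the abstract describes analytically.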
