

On the Frequency-bias of Coordinate-MLPs

Sameera Ramasinghe · Lachlan E. MacDonald · Simon Lucey

Hall J (level 1) #429

Keywords: [ Implicit Regularization ] [ Implicit Neural Representations ] [ Coordinate Networks ]


We show that typical implicit regularization assumptions for deep neural networks (for regression) do not hold for coordinate-MLPs, a family of MLPs that are now ubiquitous in computer vision for representing high-frequency signals. The lack of such implicit bias disrupts smooth interpolation between training samples and hampers generalization across signal regions with different spectra. We investigate this behavior through a Fourier lens and uncover that as the bandwidth of a coordinate-MLP is increased, lower frequencies tend to get suppressed unless a suitable prior is provided explicitly. Based on these insights, we propose a simple regularization technique that mitigates the above problem and can be incorporated into existing networks without any architectural modifications.
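The abstract's Fourier-lens observation — that widening a coordinate network's bandwidth shifts spectral energy toward high frequencies — can be illustrated with a minimal sketch. The snippet below is a hypothetical toy, not the authors' architecture or their proposed regularizer: it uses a random Fourier-feature embedding (frequencies drawn at a chosen `scale`) followed by a random linear readout, then measures the fraction of the output's spectral energy above a low-frequency cutoff bin.

```python
import numpy as np

def coordinate_mlp(x, scale, hidden=64, seed=0):
    # Hypothetical sketch: a random Fourier-feature embedding followed by
    # one random linear readout. This stands in for a coordinate-MLP whose
    # "bandwidth" is controlled by the embedding frequency scale.
    rng = np.random.default_rng(seed)
    B = rng.normal(0.0, scale, size=(hidden,))          # embedding frequencies
    feats = np.concatenate([np.sin(np.outer(x, B)),
                            np.cos(np.outer(x, B))], axis=1)
    w = rng.normal(0.0, 1.0, size=feats.shape[1])       # random readout weights
    return feats @ w / np.sqrt(feats.shape[1])

def high_freq_fraction(y, cutoff=16):
    # Share of the output's spectral energy above a low-frequency cutoff bin.
    power = np.abs(np.fft.rfft(y)) ** 2
    return power[cutoff:].sum() / power.sum()

x = np.linspace(0.0, 1.0, 1024)
low = high_freq_fraction(coordinate_mlp(x, scale=2.0))    # narrow bandwidth
high = high_freq_fraction(coordinate_mlp(x, scale=200.0)) # wide bandwidth
# A larger embedding scale concentrates output energy at higher frequencies,
# consistent with lower frequencies being comparatively suppressed.
print(low, high)
```

This only visualizes the frequency trade-off the abstract describes; the paper's actual analysis and the proposed regularization technique are in the full text.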
