

Poster

A Theoretical Analysis of the Test Error of Finite-Rank Kernel Ridge Regression

Tin Sum Cheng · Aurelien Lucchi · Anastasis Kratsios · Ivan Dokmanić · David Belius

Great Hall & Hall B1+B2 (level 1) #1725
[ Paper ] [ Poster ] [ OpenReview ]
Thu 14 Dec 8:45 a.m. PST — 10:45 a.m. PST

Abstract:

Existing statistical learning guarantees for general kernel regressors often yield loose bounds when applied to finite-rank kernels. Yet, finite-rank kernels arise naturally in a number of machine learning problems, e.g. when fine-tuning only the last layer of a pre-trained deep neural network to adapt it to a novel task in transfer learning. We address this gap for finite-rank kernel ridge regression (KRR) by deriving sharp non-asymptotic upper and lower bounds on the test error of any finite-rank KRR. Our bounds are tighter than previously derived bounds on finite-rank KRR and, unlike comparable results, they remain valid for any choice of the regularization parameter.
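For readers unfamiliar with the setting, the following is a minimal NumPy sketch of finite-rank KRR under the assumptions stated in its comments. It is illustrative only, not the authors' implementation: the function names, the feature map, and the lambda*n regularization convention are all assumptions made for the example.

```python
import numpy as np

# Finite-rank KRR sketch: the kernel k(x, x') = phi(x)^T phi(x') has rank
# at most M, where phi is an M-dimensional feature map (e.g. the
# penultimate-layer activations of a pre-trained network whose last layer
# is being fine-tuned).

def fit_finite_rank_krr(Phi, y, lam):
    """Ridge regression in the M-dimensional feature space.

    Phi : (n, M) feature matrix with rows phi(x_i).
    y   : (n,) training targets.
    lam : regularization parameter (the paper's bounds hold for any value).
    """
    n, M = Phi.shape
    # Primal solution w = (Phi^T Phi + lam * n * I)^{-1} Phi^T y:
    # an M x M solve, cheaper than the n x n dual solve when M << n.
    return np.linalg.solve(Phi.T @ Phi + lam * n * np.eye(M), Phi.T @ y)

def predict(Phi_test, w):
    return Phi_test @ w

# Toy usage: random rank-5 features, n = 200 samples.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(200, 5))
y = Phi @ rng.normal(size=5) + 0.1 * rng.normal(size=200)
w = fit_finite_rank_krr(Phi, y, lam=1e-2)
train_mse = np.mean((predict(Phi, w) - y) ** 2)
```

Because the kernel has rank at most M, the test error in this setting is governed by a finite spectrum, which is what allows the paper's bounds to be sharper than guarantees written for general (infinite-rank) kernels.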
