
Optimistic Rates for Multi-Task Representation Learning

Austin Watkins · Enayat Ullah · Thanh Nguyen-Tang · Raman Arora

Great Hall & Hall B1+B2 (level 1) #1718
Thu 14 Dec 8:45 a.m. PST — 10:45 a.m. PST

Abstract: We study the problem of transfer learning via Multi-Task Representation Learning (MTRL), wherein multiple source tasks are used to learn a good common representation, and a predictor is trained on top of it for the target task. Under standard regularity assumptions on the loss function and task diversity, we provide new statistical rates on the excess risk of the target task, which demonstrate the benefit of representation learning. Importantly, our rates are optimistic, i.e., they interpolate between the standard $O(m^{-1/2})$ rate and the fast $O(m^{-1})$ rate, depending on the difficulty of the learning task, where $m$ is the number of samples for the target task. Beyond the main result, we make several new contributions, including optimistic rates for the excess risk of the source tasks (multi-task learning, MTL), a local Rademacher complexity theorem for MTRL and MTL, and a chain rule for the local Rademacher complexity of composite predictor classes.
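To make the interpolation concrete, optimistic excess-risk bounds in the literature typically take the following schematic form; this is an illustrative sketch of the general shape of such bounds, not the paper's exact statement, and the symbols $L^\*$ (risk of the best predictor in the class) and $C$ (a complexity term for the predictor class) are stand-ins introduced here for exposition:

$$
\text{ExcessRisk} \;\lesssim\; \sqrt{\frac{L^{*}\, C}{m}} \;+\; \frac{C}{m}.
$$

When the task is easy in the sense that $L^{*} \approx 0$ (near-realizable), the first term vanishes and the bound scales as $O(m^{-1})$; when $L^{*}$ is bounded away from zero, the first term dominates and the bound recovers the standard $O(m^{-1/2})$ rate.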