
Hypothesis Transfer Learning via Transformation Functions
Simon Du · Jayanth Koushik · Aarti Singh · Barnabas Poczos

Wed Dec 06 06:30 PM -- 10:30 PM (PST) @ Pacific Ballroom #38

We consider the Hypothesis Transfer Learning (HTL) problem, where one incorporates a hypothesis trained on the source domain into the learning procedure of the target domain. Existing theoretical analyses either study only specific algorithms or present upper bounds on the generalization error but not on the excess risk. In this paper, we propose a unified algorithm-dependent framework for HTL through a novel notion of transformation functions, which characterizes the relation between the source and the target domains. We conduct a general risk analysis of this framework and, in particular, show for the first time that if the two domains are related, HTL enjoys faster convergence rates of the excess risk for Kernel Smoothing and Kernel Ridge Regression than in the classical non-transfer-learning setting. We accompany this framework with an analysis of cross-validation for HTL, which searches for the best transfer technique and gracefully reduces to non-transfer learning when HTL is not helpful. Experiments on robotics and neural imaging data demonstrate the effectiveness of our framework.
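To make the idea concrete, the following is a minimal illustrative sketch (not the paper's actual framework or experiments) of one simple transformation function: an additive "offset" transfer, where the target hypothesis is modeled as the source hypothesis plus a correction learned from scarce target data with Kernel Ridge Regression. All data, hyperparameters, and function names here are hypothetical.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Source domain: abundant data from a hypothetical f_S(x) = sin(x).
Xs = rng.uniform(-3, 3, size=(200, 1))
ys = np.sin(Xs).ravel() + 0.05 * rng.normal(size=200)

# Target domain: only a few samples from a related f_T(x) = sin(x) + 0.5x.
Xt = rng.uniform(-3, 3, size=(15, 1))
yt = np.sin(Xt).ravel() + 0.5 * Xt.ravel() + 0.05 * rng.normal(size=15)

# 1. Train a source hypothesis h_S with Kernel Ridge Regression.
h_s = KernelRidge(kernel="rbf", alpha=1e-2, gamma=1.0).fit(Xs, ys)

# 2. Transfer step: instead of learning the target function from scratch,
#    fit only the residual y_T - h_S(x), i.e. the (assumed additive)
#    transformation between the two domains.
resid = yt - h_s.predict(Xt)
h_delta = KernelRidge(kernel="rbf", alpha=1e-1, gamma=1.0).fit(Xt, resid)

def h_transfer(X):
    """Target hypothesis: source hypothesis plus learned correction."""
    return h_s.predict(X) + h_delta.predict(X)

# 3. Baseline: non-transfer learning on the scarce target data alone.
h_plain = KernelRidge(kernel="rbf", alpha=1e-1, gamma=1.0).fit(Xt, yt)

# Compare mean squared error on a held-out grid of target inputs.
Xe = np.linspace(-3, 3, 200).reshape(-1, 1)
ye = np.sin(Xe).ravel() + 0.5 * Xe.ravel()
mse_transfer = float(np.mean((h_transfer(Xe) - ye) ** 2))
mse_plain = float(np.mean((h_plain.predict(Xe) - ye) ** 2))
print(f"transfer MSE: {mse_transfer:.4f}, non-transfer MSE: {mse_plain:.4f}")
```

In the paper's framework, the additive form used above is just one candidate transformation function; the cross-validation analysis would select among such candidates (including the non-transfer baseline) based on held-out target performance.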

Author Information

Simon Du (Carnegie Mellon University)
Jayanth Koushik (Carnegie Mellon University)
Aarti Singh (Carnegie Mellon University)
Barnabas Poczos (Carnegie Mellon University)