Transfer Learning with Affine Model Transformation
Shunya Minami · Kenji Fukumizu · Yoshihiro Hayashi · Ryo Yoshida

Wed Dec 13 08:45 AM -- 10:45 AM (PST) @ Great Hall & Hall B1+B2 #1015
Event URL: https://github.com/mshunya/AffineTL

Supervised transfer learning has received considerable attention due to its potential to boost the predictive power of machine learning in scenarios where data are scarce. Generally, a given set of source models and a dataset from the target domain are used to adapt the pre-trained models to that domain by statistically learning the domain shift and domain-specific factors. While such procedurally and intuitively plausible methods have achieved great success in a wide range of real-world applications, the lack of a theoretical basis hinders further methodological development. This paper presents a general class of transfer learning regression called affine model transfer, derived from the principle of expected squared-loss minimization. It is shown that the affine model transfer broadly encompasses various existing methods, including the most common procedure based on neural feature extractors. Furthermore, the paper clarifies theoretical properties of the affine model transfer, such as its generalization error and excess risk. Through several case studies, we demonstrate the practical benefits of separately modeling and estimating inter-domain commonality and domain-specific factors with affine-type transfer models.
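To make the idea concrete, the following is a minimal, hypothetical sketch of an affine-type transfer model in the spirit the abstract describes: the target predictor combines a fixed source model with learned inter-domain and domain-specific components in an affine form f(x) = g1(fs(x)) + g2(fs(x))·g3(x). The function names (`fs`, `predict`), the choice of linear components, and the ridge-regression fit are illustrative assumptions, not the paper's actual implementation (see the event URL for the authors' code).

```python
import numpy as np

rng = np.random.default_rng(0)

def fs(x):
    # Stand-in for a pre-trained source model (assumed given, not learned here).
    return np.sin(x)

# Scarce target data whose true function shares structure with the source model.
n = 30
x = rng.uniform(-3, 3, size=n)
y = 1.5 * fs(x) + 0.3 * x + rng.normal(scale=0.1, size=n)

# Affine-transfer design matrix: an intercept and source-output column for
# g1(fs(x)), a cross term fs(x)*x for g2(fs(x))*g3(x), and a domain-specific
# linear term in x; all coefficients are fit jointly by ridge regression.
Phi = np.column_stack([np.ones(n), fs(x), fs(x) * x, x])
lam = 1e-3
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

def predict(x_new):
    # Evaluate the fitted affine transfer model on new target-domain inputs.
    Phi_new = np.column_stack([np.ones_like(x_new), fs(x_new),
                               fs(x_new) * x_new, x_new])
    return Phi_new @ w
```

Because the source output and the domain-specific term enter as separate columns, their estimated coefficients directly expose how much of the target signal is inter-domain commonality versus target-specific structure.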

Author Information

Shunya Minami (The Institute of Statistical Mathematics)
Kenji Fukumizu (Institute of Statistical Mathematics / Preferred Networks / RIKEN AIP)
Yoshihiro Hayashi (The Institute of Statistical Mathematics)
Ryo Yoshida (The Institute of Statistical Mathematics / Tokyo Institute of Technology)
