Transfer learning has emerged as a powerful technique for improving the performance of machine learning models on new domains where labeled training data is scarce. In this approach, a model trained on a source task, for which plenty of labeled training data is available, is used as a starting point for training a model on a related target task with only a few labeled training examples. Despite the recent empirical success of transfer learning approaches, their benefits and fundamental limits remain poorly understood. In this paper we develop a statistical minimax framework to characterize the fundamental limits of transfer learning in the context of regression with linear and one-hidden-layer neural network models. Specifically, we derive a lower bound on the target generalization error achievable by any algorithm as a function of the number of labeled source and target data points, as well as appropriate notions of similarity between the source and target tasks. Our lower bound provides new insights into the benefits and limitations of transfer learning. We further corroborate our theoretical findings with various experiments.
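The linear-regression setting the abstract studies can be sketched in a few lines: fit a model on plentiful source data, then use it as a warm start for gradient descent on a handful of target samples. This is a minimal illustrative sketch, not the paper's actual construction; the 0.1 perturbation standing in for task similarity and all dimensions are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_source, n_target = 20, 1000, 10

# Related source/target tasks: target parameters are a small
# perturbation of the source parameters (a stand-in for task similarity).
theta_source = rng.normal(size=d)
theta_target = theta_source + 0.1 * rng.normal(size=d)

# Plentiful labeled source data, scarce labeled target data.
Xs = rng.normal(size=(n_source, d))
ys = Xs @ theta_source + 0.1 * rng.normal(size=n_source)
Xt = rng.normal(size=(n_target, d))
yt = Xt @ theta_target + 0.1 * rng.normal(size=n_target)

# Fit the source model by least squares.
theta_hat_s, *_ = np.linalg.lstsq(Xs, ys, rcond=None)

def finetune(theta_init, X, y, lr=0.01, steps=200):
    """Run a few gradient steps on the target squared loss from a given init."""
    theta = theta_init.copy()
    for _ in range(steps):
        theta -= lr * X.T @ (X @ theta - y) / len(y)
    return theta

theta_transfer = finetune(theta_hat_s, Xt, yt)  # warm start from source model
theta_scratch = finetune(np.zeros(d), Xt, yt)   # train on target data alone

err = lambda th: np.linalg.norm(th - theta_target)
print("transfer:", err(theta_transfer), " scratch:", err(theta_scratch))
```

Because n_target < d, gradient descent cannot move the parameters outside the row space of the target data; a warm start close to the target task keeps the unrecoverable component small, which is the intuition the paper's similarity-dependent lower bound makes precise.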
Author Information
Mohammadreza Mousavi Kalan (University of Southern California)
Zalan Fabian (University of Southern California)
Salman Avestimehr (University of Southern California)
Mahdi Soltanolkotabi (University of Southern California)
More from the Same Authors
- 2020 : On Polynomial Approximations for Privacy-Preserving and Verifiable ReLU Networks
  Salman Avestimehr
- 2021 : Basil: A Fast and Byzantine-Resilient Approach for Decentralized Training
  Ahmed Elkordy · Saurav Prakash · Salman Avestimehr
- 2021 : Secure Aggregation for Buffered Asynchronous Federated Learning
  Jinhyun So · Ramy Ali · Basak Guler · Salman Avestimehr
- 2021 : FairFed: Enabling Group Fairness in Federated Learning
  Yahya Ezzeldin · Shen Yan · Chaoyang He · Emilio Ferrara · Salman Avestimehr
- 2022 : Federated Learning of Large Models at the Edge via Principal Sub-Model Training
  Yue Niu · Saurav Prakash · Souvik Kundu · Sunwoo Lee · Salman Avestimehr
- 2022 : Federated Sparse Training: Lottery Aware Model Compression for Resource Constrained Edge
  Sara Babakniya · Souvik Kundu · Saurav Prakash · Yue Niu · Salman Avestimehr
- 2022 : pFLSynth: Personalized Federated Learning of Image Synthesis in Multi-Contrast MRI
  Onat Dalmaz · Muhammad U Mirza · Gökberk Elmas · Muzaffer Özbey · Salman Ul Hassan Dar · Emir Ceyani · Salman Avestimehr · Tolga Cukur
- 2022 Spotlight: Self-Aware Personalized Federated Learning
  Huili Chen · Jie Ding · Eric W. Tramel · Shuang Wu · Anit Kumar Sahu · Salman Avestimehr · Tao Zhang
- 2022 : LightVeriFL: Lightweight and Verifiable Secure Federated Learning
  Baturalp Buyukates · Jinhyun So · Hessam Mahdavifar · Salman Avestimehr
- 2022 Poster: Self-Aware Personalized Federated Learning
  Huili Chen · Jie Ding · Eric W. Tramel · Shuang Wu · Anit Kumar Sahu · Salman Avestimehr · Tao Zhang
- 2022 Poster: HUMUS-Net: Hybrid Unrolled Multi-scale Network Architecture for Accelerated MRI Reconstruction
  Zalan Fabian · Berk Tinaz · Mahdi Soltanolkotabi
- 2022 Poster: FLamby: Datasets and Benchmarks for Cross-Silo Federated Learning in Realistic Healthcare Settings
  Jean Ogier du Terrail · Samy-Safwan Ayed · Edwige Cyffers · Felix Grimberg · Chaoyang He · Regis Loeb · Paul Mangold · Tanguy Marchand · Othmane Marfoq · Erum Mushtaq · Boris Muzellec · Constantin Philippenko · Santiago Silva · Maria Teleńczuk · Shadi Albarqouni · Salman Avestimehr · Aurélien Bellet · Aymeric Dieuleveut · Martin Jaggi · Sai Praneeth Karimireddy · Marco Lorenzi · Giovanni Neglia · Marc Tommasi · Mathieu Andreux
- 2021 Workshop: Workshop on Deep Learning and Inverse Problems
  Reinhard Heckel · Paul Hand · Rebecca Willett · Christopher Metzler · Mahdi Soltanolkotabi
- 2020 Poster: Theoretical Insights Into Multiclass Classification: A High-dimensional Asymptotic View
  Christos Thrampoulidis · Samet Oymak · Mahdi Soltanolkotabi
- 2020 Poster: Group Knowledge Transfer: Federated Learning of Large CNNs at the Edge
  Chaoyang He · Murali Annavaram · Salman Avestimehr
- 2020 Poster: A Scalable Approach for Privacy-Preserving Collaborative Machine Learning
  Jinhyun So · Basak Guler · Salman Avestimehr
- 2019 : Denoising via Early Stopping
  Mahdi Soltanolkotabi
- 2018 Poster: Pipe-SGD: A Decentralized Pipelined SGD Framework for Distributed Deep Net Training
  Youjie Li · Mingchao Yu · Songze Li · Salman Avestimehr · Nam Sung Kim · Alex Schwing
- 2018 Poster: GradiVeQ: Vector Quantization for Bandwidth-Efficient Gradient Aggregation in Distributed CNN Training
  Mingchao Yu · Zhifeng Lin · Krishna Narra · Songze Li · Youjie Li · Nam Sung Kim · Alex Schwing · Murali Annavaram · Salman Avestimehr
- 2017 Poster: Polynomial Codes: an Optimal Design for High-Dimensional Coded Matrix Multiplication
  Qian Yu · Mohammad Maddah-Ali · Salman Avestimehr
- 2017 Poster: Learning ReLUs via Gradient Descent
  Mahdi Soltanolkotabi
- 2017 Poster: Gradient Methods for Submodular Maximization
  Hamed Hassani · Mahdi Soltanolkotabi · Amin Karbasi