Regularization and transfer learning are two popular techniques for enhancing model generalization on unseen data, which is a fundamental problem in machine learning. Regularization techniques are versatile, as they are task- and architecture-agnostic, but they do not exploit the large amount of available data. Transfer learning methods learn to transfer knowledge from one domain to another, but may not generalize across tasks and architectures, and may introduce additional training cost for adapting to the target task. To bridge the gap between the two, we propose MetaPerturb, a transferable perturbation that is meta-learned to improve generalization on unseen data. MetaPerturb is implemented as a set-based lightweight network that is agnostic to the size and order of its input and is shared across layers. We further propose a meta-learning framework that jointly trains the perturbation function over heterogeneous tasks in parallel. As MetaPerturb is a set function trained over diverse distributions across layers and tasks, it can generalize to heterogeneous tasks and architectures. We validate the efficacy and generality of MetaPerturb trained on a specific source domain and architecture by applying it to the training of diverse neural architectures on heterogeneous target datasets, comparing against various regularizers and fine-tuning. The results show that networks trained with MetaPerturb significantly outperform the baselines on most tasks and architectures, with a negligible increase in parameter size and no hyperparameters to tune.
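The abstract describes MetaPerturb as a set-based, order-agnostic perturbation module whose single set of parameters is shared across layers. Below is a minimal, illustrative PyTorch sketch of what such a module might look like; the module name `SetPerturb`, the choice of per-channel statistics, and the tiny MLP are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn


class SetPerturb(nn.Module):
    """Illustrative set-based perturbation module (a sketch, not the authors' exact design).

    Treats the channels of a feature map as a set: each channel is rescaled by a
    factor computed from its own spatial statistics plus a pooled summary of all
    channels, so the module is equivariant to channel order and independent of
    channel count. A single instance can thus be shared across layers and
    inserted into different architectures.
    """

    def __init__(self, hidden_dim: int = 8):
        super().__init__()
        # Tiny MLP mapping per-channel statistics to a perturbation scale.
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from any convolutional layer.
        mean = x.mean(dim=(2, 3))                              # (B, C)
        std = x.std(dim=(2, 3))                                # (B, C)
        # Channel-pooled (set-level) context, broadcast back to every channel.
        ctx_mean = mean.mean(dim=1, keepdim=True).expand_as(mean)
        ctx_std = std.mean(dim=1, keepdim=True).expand_as(std)
        stats = torch.stack([mean, std, ctx_mean, ctx_std], dim=-1)  # (B, C, 4)
        scale = torch.sigmoid(self.mlp(stats))                 # (B, C, 1)
        # Multiplicative perturbation, broadcast over the spatial dimensions.
        return x * scale.unsqueeze(-1)                         # (B, C, H, W)


# Usage: the same (meta-learned) module instance can be applied after every layer.
perturb = SetPerturb()
features = torch.randn(4, 64, 16, 16)   # e.g., output of some conv block
perturbed = perturb(features)            # same shape, channel-count agnostic
```

Because the module consumes only per-channel statistics and a channel-pooled summary, the same parameters can be applied after layers of any width, which is what would allow a perturbation meta-learned on one source task and architecture to be reused across heterogeneous target tasks and architectures.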
Author Information
Jeong Un Ryu (KAIST)
JaeWoong Shin (KAIST)
Hae Beom Lee (KAIST)
Sung Ju Hwang (KAIST, AITRICS)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Spotlight: MetaPerturb: Transferable Regularizer for Heterogeneous Tasks and Architectures
  Wed. Dec 9th 03:00 -- 03:10 AM, Room: Orals & Spotlights: Deep Learning/Theory
More from the Same Authors
- 2022: Distortion-Aware Network Pruning and Feature Reuse for Real-time Video Segmentation
  Hyunsu Rhee · Dongchan Min · Sunil Hwang · Bruno Andreis · Sung Ju Hwang
- 2022: Targeted Adversarial Self-Supervised Learning
  Minseon Kim · Hyeonjeong Ha · Sooel Son · Sung Ju Hwang
- 2022: Few-Shot Transferable Robust Representation Learning via Bilevel Attacks
  Minseon Kim · Hyeonjeong Ha · Sung Ju Hwang
- 2020 Poster: Bootstrapping neural processes
  Juho Lee · Yoonho Lee · Jungtaek Kim · Eunho Yang · Sung Ju Hwang · Yee Whye Teh
- 2020 Poster: Distribution Aligning Refinery of Pseudo-label for Imbalanced Semi-supervised Learning
  Jaehyung Kim · Youngbum Hur · Sejun Park · Eunho Yang · Sung Ju Hwang · Jinwoo Shin
- 2020 Poster: Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph Link Prediction
  Jinheon Baek · Dong Bok Lee · Sung Ju Hwang
- 2020 Poster: Time-Reversal Symmetric ODE Network
  In Huh · Eunho Yang · Sung Ju Hwang · Jinwoo Shin
- 2020 Poster: Neural Complexity Measures
  Yoonho Lee · Juho Lee · Sung Ju Hwang · Eunho Yang · Seungjin Choi
- 2020 Poster: Adversarial Self-Supervised Contrastive Learning
  Minseon Kim · Jihoon Tack · Sung Ju Hwang
- 2020 Poster: Few-shot Visual Reasoning with Meta-Analogical Contrastive Learning
  Youngsung Kim · Jinwoo Shin · Eunho Yang · Sung Ju Hwang
- 2020 Poster: Attribution Preservation in Network Compression for Reliable Network Interpretation
  Geondo Park · June Yong Yang · Sung Ju Hwang · Eunho Yang
- 2018 Poster: Uncertainty-Aware Attention for Reliable Interpretation and Prediction
  Jay Heo · Hae Beom Lee · Saehoon Kim · Juho Lee · Kwang Joon Kim · Eunho Yang · Sung Ju Hwang
- 2018 Poster: Joint Active Feature Acquisition and Classification with Variable-Size Set Encoding
  Hajin Shim · Sung Ju Hwang · Eunho Yang
- 2018 Poster: DropMax: Adaptive Variational Softmax
  Hae Beom Lee · Juho Lee · Saehoon Kim · Eunho Yang · Sung Ju Hwang