Poster
Towards Enabling Meta-Learning from Target Models
Su Lu · Han-Jia Ye · Le Gan · De-Chuan Zhan
Meta-learning extracts an inductive bias from previous learning experience to assist the training of new tasks. It is often realized by optimizing a meta-model with the evaluation loss of task-specific solvers. For simplicity, most existing algorithms sample non-overlapping $\mathit{support}$ and $\mathit{query}$ sets to train and evaluate the solvers, respectively (the $\mathcal{S}$/$\mathcal{Q}$ protocol). Alternatively, a task-specific solver can be evaluated by comparing it to a target model $\mathcal{T}$, i.e., the optimal model for the task or a model that behaves well enough on it (the $\mathcal{S}$/$\mathcal{T}$ protocol). Although under-explored, the $\mathcal{S}$/$\mathcal{T}$ protocol has unique advantages, such as offering more informative supervision, but it is computationally expensive. This paper examines this special evaluation method and takes a step towards putting it into practice. We find that equipping even a small fraction of tasks with target models substantially improves classic meta-learning algorithms at modest computational cost. We empirically verify the effectiveness of the $\mathcal{S}$/$\mathcal{T}$ protocol in a typical application of meta-learning, $\mathit{i.e.}$, few-shot learning. Concretely, after constructing target models by fine-tuning the pre-trained network on hard tasks, we match the task-specific solvers and target models via knowledge distillation.
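The two evaluation protocols in the abstract can be contrasted with a minimal sketch. This is not the paper's implementation; the function names, logit inputs, and temperature value are illustrative assumptions. Under S/Q, the solver is scored by cross-entropy on held-out query labels; under S/T, it is scored by how closely its (temperature-softened) predictions match those of a target model, as in standard knowledge distillation:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Numerically stable softmax with optional temperature softening.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sq_loss(solver_logits, query_labels):
    # S/Q protocol: cross-entropy of the solver's predictions
    # on a held-out query set (label supervision only).
    p = softmax(solver_logits)
    n = len(query_labels)
    return -np.mean(np.log(p[np.arange(n), query_labels]))

def st_loss(solver_logits, target_logits, temperature=4.0):
    # S/T protocol (sketch): knowledge-distillation loss matching the
    # solver to a target model via KL divergence between softened
    # predictions; the T^2 factor is the usual distillation scaling.
    p_t = softmax(target_logits, temperature)
    p_s = softmax(solver_logits, temperature)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)
    return (temperature ** 2) * np.mean(kl)
```

The S/T loss supervises the solver on every class's probability rather than a single hard label, which is one way to read the abstract's claim that target models offer "more informative supervision".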
Author Information
Su Lu (Nanjing University)
Han-Jia Ye (Nanjing University)
Le Gan (Nanjing University)
De-Chuan Zhan (Nanjing University)
More from the Same Authors
- 2023 Poster: Model Spider: Learning to Rank Pre-Trained Models Efficiently
  Yi-Kai Zhang · Ting-Ji Huang · Yao-Xiang Ding · De-Chuan Zhan · Han-Jia Ye
- 2023 Poster: Few-Shot Class-Incremental Learning via Training-Free Prototype Calibration
  Qi-wei Wang · Da-Wei Zhou · Yi-Kai Zhang · De-Chuan Zhan · Han-Jia Ye
- 2023 Poster: Beyond probability partitions: Calibrating neural networks with semantic aware grouping
  Jia-Qi Yang · De-Chuan Zhan · Le Gan
- 2022 Poster: Asymmetric Temperature Scaling Makes Larger Networks Teach Well Again
  Xin-Chun Li · Wen-shu Fan · Shaoming Song · Yinchuan Li · bingshuai Li · Shao Yunfeng · De-Chuan Zhan
- 2022 Spotlight: Asymmetric Temperature Scaling Makes Larger Networks Teach Well Again
  Xin-Chun Li · Wen-shu Fan · Shaoming Song · Yinchuan Li · bingshuai Li · Shao Yunfeng · De-Chuan Zhan
- 2022 Spotlight: Lightning Talks 5A-2
  Qiang LI · Zhiwei Xu · Jia-Qi Yang · Thai Hung Le · Haoxuan Qu · Yang Li · Artyom Sorokin · Peirong Zhang · Mira Finkelstein · Nitsan levy · Chung-Yiu Yau · dapeng li · Thommen Karimpanal George · De-Chuan Zhan · Nazar Buzun · Jiajia Jiang · Li Xu · Yichuan Mo · Yujun Cai · Yuliang Liu · Leonid Pugachev · Bin Zhang · Lucy Liu · Hoi-To Wai · Liangliang Shi · Majid Abdolshah · Yoav Kolumbus · Lin Geng Foo · Junchi Yan · Mikhail Burtsev · Lianwen Jin · Yuan Zhan · Dung Nguyen · David Parkes · Yunpeng Baiia · Jun Liu · Kien Do · Guoliang Fan · Jeffrey S Rosenschein · Sunil Gupta · Sarah Keren · Svetha Venkatesh
- 2022 Spotlight: Generalized Delayed Feedback Model with Post-Click Information in Recommender Systems
  Jia-Qi Yang · De-Chuan Zhan
- 2022 Poster: Generalized Delayed Feedback Model with Post-Click Information in Recommender Systems
  Jia-Qi Yang · De-Chuan Zhan
- 2021 Spotlight: A$^2$-Net: Learning Attribute-Aware Hash Codes for Large-Scale Fine-Grained Image Retrieval
  Xiu-Shen Wei · Yang Shen · Xuhao Sun · Han-Jia Ye · Jian Yang
- 2021 Poster: A$^2$-Net: Learning Attribute-Aware Hash Codes for Large-Scale Fine-Grained Image Retrieval
  Xiu-Shen Wei · Yang Shen · Xuhao Sun · Han-Jia Ye · Jian Yang
- 2016 Poster: What Makes Objects Similar: A Unified Multi-Metric Learning Approach
  Han-Jia Ye · De-Chuan Zhan · Xue-Min Si · Yuan Jiang · Zhi-Hua Zhou