Cross-domain few-shot learning (CD-FSL) has drawn increasing attention for handling large differences between the source and target domains, an important concern in real-world scenarios. To overcome these large differences, recent works have considered exploiting small-scale unlabeled data from the target domain during the pre-training stage. Such data enables self-supervised pre-training on the target domain, in addition to supervised pre-training on the source domain. In this paper, we empirically investigate which pre-training is preferred based on the domain similarity and few-shot difficulty of the target domain. We discover that the performance gain of self-supervised pre-training over supervised pre-training becomes large when the target domain is dissimilar to the source domain, or when the target domain itself has low few-shot difficulty. We further design two pre-training schemes, mixed-supervised and two-stage learning, that improve performance. Based on these observations, we present six findings for CD-FSL, supported by extensive experiments and analyses on three source and eight target benchmark datasets with varying levels of domain similarity and few-shot difficulty. Our code is available at https://github.com/sungnyun/understanding-cdfsl.
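To make the mixed-supervised idea concrete, below is a minimal PyTorch-style sketch, assuming the scheme is a weighted sum of a supervised cross-entropy loss on labeled source data and a SimCLR-style contrastive loss on two augmented views of unlabeled target data. The weight `gamma`, the `nt_xent` helper, and the `encoder`/`classifier`/`proj_head` modules are illustrative assumptions, not the authors' exact implementation; see the linked repository for that.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """SimCLR-style contrastive (NT-Xent) loss over two augmented views."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)    # (2N, d) unit vectors
    sim = z @ z.t() / temperature                  # pairwise similarities
    sim.fill_diagonal_(float('-inf'))              # exclude self-pairs
    # The positive for each view is the other view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def mixed_supervised_loss(encoder, classifier, proj_head,
                          src_x, src_y, tgt_v1, tgt_v2, gamma=0.5):
    """Weighted mix of supervised (source) and self-supervised (target) losses.
    A two-stage variant would instead optimize one objective after the other."""
    sup = F.cross_entropy(classifier(encoder(src_x)), src_y)
    ssl = nt_xent(proj_head(encoder(tgt_v1)), proj_head(encoder(tgt_v2)))
    return gamma * ssl + (1.0 - gamma) * sup
```

In a training loop, one labeled batch from the source loader and two augmented views of an unlabeled batch from the target loader would be passed to `mixed_supervised_loss`, with `gamma` trading off the two supervision signals.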
Author Information
Jaehoon Oh (KAIST)
Sungnyun Kim (KAIST)
Namgyu Ho (KAIST)
Jin-Hwa Kim (NAVER AI Lab)
Jin-Hwa Kim has been a Technical Leader and Research Scientist at NAVER AI Lab since August 2021 and a Guest Assistant Professor at the Artificial Intelligence Institute of Seoul National University (SNU AIIS) since August 2022. He studies multimodal deep learning (e.g., [visual question answering](http://visualqa.org)), multimodal generation, ethical AI, and other related topics. In 2018, he received his Ph.D. from Seoul National University under the supervision of Professor [Byoung-Tak Zhang](https://bi.snu.ac.kr/~btzhang/) for his work on "Multimodal Deep Learning for Visually-grounded Reasoning." In September 2017, he received the [2017 Google Ph.D. Fellowship](https://ai.googleblog.com/2017/09/highlights-from-annual-google-phd.html) in Machine Learning and a Ph.D. Completion Scholarship from Seoul National University, and he was a runner-up in the VQA Challenge 2018 at the [CVPR 2018 VQA Challenge and Visual Dialog Workshop](https://visualqa.org/workshop_2018.html). He was a Research Intern at [Facebook AI Research](https://research.fb.com/category/facebook-ai-research/) (Menlo Park, CA), mentored by [Yuandong Tian](http://yuandong-tian.com), [Devi Parikh](https://www.cc.gatech.edu/~parikh/), and [Dhruv Batra](https://www.cc.gatech.edu/~dbatra/), from January to May 2017. He previously worked at SK Telecom (August 2018 to July 2021) and SK Communications (January 2011 to October 2012).
Hwanjun Song (AWS AI Lab)
Se-Young Yun (KAIST)
More from the Same Authors
- 2021 : FedBABU: Towards Enhanced Representation for Federated Image Classification »
  Jaehoon Oh · SangMook Kim · Se-Young Yun
- 2021 : Neural Processes with Stochastic Attention: Paying more attention to the context dataset »
  Mingyu Kim · KyeongRyeol Go · Se-Young Yun
- 2022 : Layover Intermediate Layer for Multi-Label Classification in Efficient Transfer Learning »
  Seongha Eom · Taehyeon Kim · Se-Young Yun
- 2022 : Revisiting the Activation Function for Federated Image Classification »
  Jaewoo Shin · Taehyeon Kim · Se-Young Yun
- 2022 : Mitigating Dataset Bias by Using Per-sample Gradient »
  Sumyeong Ahn · SeongYoon Kim · Se-Young Yun
- 2022 : CUDA: Curriculum of Data Augmentation for Long-tailed Recognition »
  Sumyeong Ahn · Jongwoo Ko · Se-Young Yun
- 2023 Poster: Robust Data Pruning under Label Noise via Maximizing Re-labeling Accuracy »
  Dongmin Park · Seola Choi · Doyoung Kim · Hwanjun Song · Jae-Gil Lee
- 2023 Poster: Enhancing Generalization and Plasticity for Sample Efficient Reinforcement Learning »
  Hojoon Lee · Hanseul Cho · Hyunseung Kim · Daehoon Gwak · Joonkee Kim · Jaegul Choo · Se-Young Yun · Chulhee Yun
- 2023 Poster: Fair Streaming Principal Component Analysis: Statistical and Algorithmic Viewpoint »
  Junghyun Lee · Hanseul Cho · Se-Young Yun · Chulhee Yun
- 2022 Poster: Robust Streaming PCA »
  Daniel Bienstock · Minchan Jeong · Apurv Shukla · Se-Young Yun
- 2022 Poster: Meta-Query-Net: Resolving Purity-Informativeness Dilemma in Open-set Active Learning »
  Dongmin Park · Yooju Shin · Jihwan Bang · Youngjun Lee · Hwanjun Song · Jae-Gil Lee
- 2022 Poster: Mutual Information Divergence: A Unified Metric for Multimodal Generative Models »
  Jin-Hwa Kim · Yunji Kim · Jiyoung Lee · Kang Min Yoo · Sang-Woo Lee
- 2022 Poster: SelecMix: Debiased Learning by Contradicting-pair Sampling »
  Inwoo Hwang · Sangjun Lee · Yunhyeok Kwak · Seong Joon Oh · Damien Teney · Jin-Hwa Kim · Byoung-Tak Zhang
- 2022 Poster: Preservation of the Global Knowledge by Not-True Distillation in Federated Learning »
  Gihun Lee · Minchan Jeong · Yongjin Shin · Sangmin Bae · Se-Young Yun
- 2021 Poster: FINE Samples for Learning with Noisy Labels »
  Taehyeon Kim · Jongwoo Ko · Sangwook Cho · JinHwan Choi · Se-Young Yun
- 2020 Poster: Regret in Online Recommendation Systems »
  Kaito Ariu · Narae Ryu · Se-Young Yun · Alexandre Proutiere
- 2019 Poster: Optimal Sampling and Clustering in the Stochastic Block Model »
  Se-Young Yun · Alexandre Proutiere
- 2018 Poster: Bilinear Attention Networks »
  Jin-Hwa Kim · Jaehyun Jun · Byoung-Tak Zhang
- 2017 Poster: Overcoming Catastrophic Forgetting by Incremental Moment Matching »
  Sang-Woo Lee · Jin-Hwa Kim · Jaehyun Jun · Jung-Woo Ha · Byoung-Tak Zhang
- 2017 Spotlight: Overcoming Catastrophic Forgetting by Incremental Moment Matching »
  Sang-Woo Lee · Jin-Hwa Kim · Jaehyun Jun · Jung-Woo Ha · Byoung-Tak Zhang
- 2016 Poster: Multimodal Residual Learning for Visual QA »
  Jin-Hwa Kim · Sang-Woo Lee · Donghyun Kwak · Min-Oh Heo · Jeonghee Kim · Jung-Woo Ha · Byoung-Tak Zhang
- 2016 Poster: Optimal Cluster Recovery in the Labeled Stochastic Block Model »
  Se-Young Yun · Alexandre Proutiere
- 2015 Poster: Fast and Memory Optimal Low-Rank Matrix Approximation »
  Se-Young Yun · Marc Lelarge · Alexandre Proutiere
- 2014 Poster: Streaming, Memory Limited Algorithms for Community Detection »
  Se-Young Yun · Marc Lelarge · Alexandre Proutiere