We propose a few-shot learning method for detecting out-of-distribution (OOD) samples from classes unseen during training, while classifying samples from seen classes using only a few labeled examples. To detect unseen classes while generalizing to new samples of known classes, we synthesize fake samples, i.e., OOD samples that nevertheless resemble in-distribution samples, and use them together with real samples. Our approach, denoted OOD-MAML, extends model-agnostic meta-learning (MAML) to learn not only a model initialization but also initial fake samples shared across tasks. These learned initial fake samples can be quickly adapted to a new task, forming task-specific fake samples with only one or a few MAML-style gradient updates. At test time, OOD-MAML converts a K-shot N-way classification task into N sub-tasks of K-shot OOD detection, one per class. Jointly analyzing the N sub-tasks enables simultaneous classification and OOD detection, and additionally removes the need for re-training when the number of classes in a test task differs from that in training tasks: it suffices to assume as many sub-tasks as there are classes in the test task. We also demonstrate the effective performance of OOD-MAML on benchmark datasets.
Taewon Jeong (KAIST)
Heeyoung Kim (KAIST)
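The joint decision over the N per-class OOD sub-tasks described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the score representation (one in-distribution probability per class produced by each adapted sub-task detector), and the fixed rejection threshold are all assumptions made here for clarity.

```python
import numpy as np

def ood_maml_decision(scores, threshold=0.5):
    """Combine N per-class OOD sub-task scores into one joint decision.

    scores    : length-N sequence; scores[n] is the (assumed) probability,
                from the n-th adapted sub-task detector, that the query
                sample is in-distribution for class n.
    threshold : rejection cutoff (illustrative value, not from the paper).

    Returns the predicted class index, or -1 if every sub-task rejects
    the query, i.e., the sample is flagged as OOD (unseen class).
    """
    scores = np.asarray(scores, dtype=float)
    if scores.max() < threshold:   # no sub-task accepts the query sample
        return -1                  # -1 denotes "OOD / unseen class"
    return int(scores.argmax())    # otherwise, pick the strongest class
```

Because the decision is just a maximum over per-class detectors, using a different number of classes at test time only changes the length of `scores`; no re-training is implied, which mirrors the flexibility claimed in the abstract.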