Poster
A Surprisingly Simple Approach to Generalized Few-Shot Semantic Segmentation
Tomoya Sakai · Haoxiang Qiu · Takayuki Katsuki · Daiki Kimura · Takayuki Osogami · Tadanobu Inoue
East Exhibit Hall A-C #1810
Wed 11 Dec 11 a.m. PST — 2 p.m. PST
Abstract:
The goal of *generalized* few-shot semantic segmentation (GFSS) is to recognize *novel-class* objects through training with a few annotated examples and a *base-class* model that has learned knowledge about the base classes. Unlike classic few-shot semantic segmentation, GFSS aims to classify pixels into both base and novel classes, making it a more practical setting. To this end, existing methods rely on several techniques, such as carefully customized models, various combinations of loss functions, and transductive learning. However, we found that a simple rule combined with standard supervised learning substantially improves performance. In this paper, we propose a simple yet effective method for GFSS that does not employ the techniques used in existing methods mentioned above. We also theoretically show that our method perfectly maintains the segmentation performance of the base-class model over most of the base classes. Through numerical experiments, we demonstrate the effectiveness of the proposed method. In particular, our method improves novel-class segmentation performance in the 1-shot scenario by 6.1\% on PASCAL-$5^i$, 4.7\% on PASCAL-$10^i$, and 1.0\% on COCO-$20^i$.
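The abstract describes combining a base-class model with a novel-class classifier trained by standard supervised learning, fused through a simple rule. The sketch below is only a rough illustration of such a fusion step under an assumed rule (prefer the novel head when its best per-pixel score beats the base model's); the function `combine_predictions` and the array names `base_logits` and `novel_logits` are hypothetical and not taken from the paper.

```python
# Illustrative sketch (NumPy only): fuse per-pixel scores from a frozen
# base-class segmenter with a separately trained novel-class head.
# The decision rule here is an assumption, not the authors' exact rule.
import numpy as np


def combine_predictions(base_logits: np.ndarray, novel_logits: np.ndarray) -> np.ndarray:
    """Fuse per-pixel scores from a base-class model and a novel-class head.

    base_logits:  (H, W, B) scores for B base classes.
    novel_logits: (H, W, N) scores for N novel classes.
    Returns an (H, W) label map with base classes in [0, B)
    and novel classes in [B, B + N).
    """
    base_best = base_logits.argmax(axis=-1)    # per-pixel base label
    base_score = base_logits.max(axis=-1)      # its score
    novel_best = novel_logits.argmax(axis=-1)  # per-pixel novel label
    novel_score = novel_logits.max(axis=-1)    # its score

    labels = base_best.copy()
    take_novel = novel_score > base_score      # assumed "simple rule"
    labels[take_novel] = novel_best[take_novel] + base_logits.shape[-1]
    return labels


# Usage: random scores for a 4x4 image, 3 base classes, 2 novel classes.
rng = np.random.default_rng(0)
fused = combine_predictions(rng.normal(size=(4, 4, 3)),
                            rng.normal(size=(4, 4, 2)))
print(fused)
```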