Few-shot segmentation aims to train a segmentation model that can quickly adapt to novel classes given only a few exemplars. The conventional training paradigm learns to make predictions on query images conditioned on features from support images. Previous methods used only semantic-level prototypes of the support images as the conditional information, and therefore could not exploit all of the pixel-wise support information, which is critical for a dense prediction task like segmentation. In this paper, we focus on utilizing pixel-wise relationships between support and query images to facilitate few-shot semantic segmentation. We design a novel Cycle-Consistent Transformer (CyCTR) module that aggregates pixel-wise support features into query features. CyCTR performs cross-attention between features from different images, i.e., support and query images. However, some pixel-level support features may be irrelevant to the query, and directly performing cross-attention would aggregate them into the query and bias its features. We therefore propose a novel cycle-consistent attention mechanism that filters out potentially harmful support features and encourages query features to attend to the most informative pixels of the support images. Experiments on standard few-shot segmentation benchmarks demonstrate that CyCTR yields remarkable improvements over previous state-of-the-art methods. Specifically, on the Pascal-5^i and COCO-20^i datasets, we achieve 66.6% and 45.6% mIoU for 5-shot segmentation, outperforming the previous state of the art by 4.6% and 7.1%, respectively.
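To make the mechanism concrete, below is a minimal single-head sketch of cycle-consistent cross-attention in PyTorch. It is our illustration of the idea described in the abstract, not the authors' released implementation; the function name cycle_consistent_attention, the flattened feature shapes, and the binary support mask s_mask are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def cycle_consistent_attention(q_feat, s_feat, s_mask):
    """Single-head cycle-consistent cross-attention (illustrative sketch).

    q_feat: (Nq, C) flattened query-image features (used as queries)
    s_feat: (Ns, C) flattened support-image features (keys and values)
    s_mask: (Ns,)   binary support mask, 1 for foreground pixels
    """
    scale = q_feat.shape[-1] ** -0.5

    # Affinity between every query token and every support token.
    affinity = (q_feat @ s_feat.t()) * scale        # (Nq, Ns)

    # Cycle: each support token j picks its most-affine query token i*,
    # and i* in turn picks its most-affine support token j*.
    i_star = affinity.argmax(dim=0)                 # (Ns,)
    j_star = affinity.argmax(dim=1)[i_star]         # (Ns,)

    # Support token j is cycle-consistent if j and j* carry the same
    # mask label; inconsistent tokens are suppressed before the softmax
    # (assumes at least one support token remains consistent).
    consistent = s_mask == s_mask[j_star]           # (Ns,) bool
    logits = affinity.masked_fill(~consistent, float('-inf'))

    attn = F.softmax(logits, dim=-1)                # (Nq, Ns)
    return attn @ s_feat                            # (Nq, C)
```

In the full model this cross-attention would be applied per head and interleaved with self-attention over the query tokens; the sketch keeps only the cycle-consistency filtering that distinguishes it from standard cross-attention.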
Author Information
Gengwei Zhang (Sun Yat-sen University)
Guoliang Kang (Carnegie Mellon University)
Yi Yang (University of Technology Sydney)
Yunchao Wei (University of Technology Sydney)
More from the Same Authors
- 2022 Spotlight: Mask Matching Transformer for Few-Shot Segmentation
  Siyu Jiao · Gengwei Zhang · Shant Navasardyan · Ling Chen · Yao Zhao · Yunchao Wei · Humphrey Shi
- 2022 Poster: Mask Matching Transformer for Few-Shot Segmentation
  Siyu Jiao · Gengwei Zhang · Shant Navasardyan · Ling Chen · Yao Zhao · Yunchao Wei · Humphrey Shi
- 2021 Poster: Associating Objects with Transformers for Video Object Segmentation
  Zongxin Yang · Yunchao Wei · Yi Yang
- 2020 Poster: Consistent Structural Relation Learning for Zero-Shot Segmentation
  Peike Li · Yunchao Wei · Yi Yang
- 2020 Spotlight: Consistent Structural Relation Learning for Zero-Shot Segmentation
  Peike Li · Yunchao Wei · Yi Yang
- 2020 Poster: Adversarial Style Mining for One-Shot Unsupervised Domain Adaptation
  Yawei Luo · Ping Liu · Tao Guan · Junqing Yu · Yi Yang
- 2020 Poster: Pixel-Level Cycle Association: A New Perspective for Domain Adaptive Semantic Segmentation
  Guoliang Kang · Yunchao Wei · Yi Yang · Yueting Zhuang · Alexander Hauptmann
- 2020 Poster: Auto-Panoptic: Cooperative Multi-Component Architecture Search for Panoptic Segmentation
  Yangxin Wu · Gengwei Zhang · Hang Xu · Xiaodan Liang · Liang Lin
- 2020 Oral: Pixel-Level Cycle Association: A New Perspective for Domain Adaptive Semantic Segmentation
  Guoliang Kang · Yunchao Wei · Yi Yang · Yueting Zhuang · Alexander Hauptmann
- 2019 Poster: Connective Cognition Network for Directional Visual Commonsense Reasoning
  Aming Wu · Linchao Zhu · Yahong Han · Yi Yang
- 2019 Poster: Network Pruning via Transformable Architecture Search
  Xuanyi Dong · Yi Yang
- 2018 Poster: Self-Erasing Network for Integral Object Attention
  Qibin Hou · PengTao Jiang · Yunchao Wei · Ming-Ming Cheng