Poster
Mask Matching Transformer for Few-Shot Segmentation
Siyu Jiao · Gengwei Zhang · Shant Navasardyan · Ling Chen · Yao Zhao · Yunchao Wei · Humphrey Shi
In this paper, we aim to tackle the challenging few-shot segmentation task from a new perspective. Typical methods follow the paradigm of first learning prototypical features from support images and then matching query features at the pixel level to obtain segmentation results. However, to obtain satisfactory segments, such a paradigm needs to couple the learning of the matching operations with heavy segmentation modules, limiting the flexibility of design and increasing the learning complexity. To alleviate this issue, we propose the Mask Matching Transformer (MM-Former), a new paradigm for the few-shot segmentation task. Specifically, MM-Former first uses a class-agnostic segmenter to decompose the query image into multiple segment proposals. Then, a simple matching mechanism is applied to merge the related segment proposals into the final mask, guided by the support images. The advantages of our MM-Former are two-fold. First, the MM-Former follows the paradigm of 'decompose first and then blend', allowing our method to benefit from advanced potential-object segmenters to produce high-quality mask proposals for query images. Second, the role of the prototypical features is relaxed to learning coefficients that fuse the correct proposals within a proposal pool, allowing the MM-Former to generalize well to complex scenarios or cases. We conduct extensive experiments on the popular COCO-$20^i$ and Pascal-$5^i$ benchmarks. Competitive results demonstrate the effectiveness and the generalization ability of our MM-Former. Code is available at https://github.com/Picsart-AI-Research/Mask-Matching-Transformer.
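The 'decompose first and then blend' idea in the abstract can be illustrated with a short sketch: a class-agnostic segmenter produces a pool of mask proposals for the query image, and the support-derived prototype only has to score how strongly each proposal matches the target class before the proposals are fused. The snippet below is a minimal PyTorch-style illustration under those assumptions; the function and tensor names (`blend_proposals`, `mask_proposals`, `proposal_embeddings`, `support_prototype`) and the cosine-similarity weighting are hypothetical simplifications, not the actual MM-Former implementation (see the linked repository for that).

```python
# Minimal sketch of "decompose first, then blend" for few-shot segmentation.
# All names, shapes, and the similarity-based weighting are illustrative
# assumptions, not the MM-Former code itself.
import torch
import torch.nn.functional as F

def blend_proposals(mask_proposals, proposal_embeddings, support_prototype):
    """Fuse class-agnostic mask proposals into a single query mask.

    mask_proposals:      (N, H, W) soft masks from a class-agnostic segmenter
    proposal_embeddings: (N, C)    one feature vector per proposal
    support_prototype:   (C,)      prototype pooled from the support images
    """
    # Matching coefficients: cosine similarity between each proposal embedding
    # and the support prototype, normalized over the proposal pool.
    sim = F.cosine_similarity(proposal_embeddings,
                              support_prototype.unsqueeze(0), dim=-1)  # (N,)
    coeffs = torch.softmax(sim / 0.1, dim=0)                           # (N,)

    # Blend: coefficient-weighted combination of the proposals, then threshold.
    fused = (coeffs.view(-1, 1, 1) * mask_proposals).sum(dim=0)        # (H, W)
    return (fused > 0.5).float()

# Toy usage with random tensors.
N, C, H, W = 8, 256, 64, 64
pred = blend_proposals(torch.rand(N, H, W), torch.randn(N, C), torch.randn(C))
```

The point of the sketch is the division of labor the abstract describes: the segmenter handles dense prediction on its own, while the support branch only needs to learn fusion coefficients over the proposal pool rather than drive a heavy pixel-level matching module.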
Author Information
Siyu Jiao (Beijing Jiaotong University)
Gengwei Zhang (Sun Yat-sen University)
Shant Navasardyan (Picsart AI Research (PAIR))
Ling Chen (" University of Technology, Sydney, Australia")
Yao Zhao (Beijing Jiaotong University)
Yunchao Wei (UTS)
Humphrey Shi (UIUC)
More from the Same Authors
- 2022 Spotlight: Mask Matching Transformer for Few-Shot Segmentation
  Siyu Jiao · Gengwei Zhang · Shant Navasardyan · Ling Chen · Yao Zhao · Yunchao Wei · Humphrey Shi
- 2021 Poster: Few-Shot Segmentation via Cycle-Consistent Transformer
  Gengwei Zhang · Guoliang Kang · Yi Yang · Yunchao Wei
- 2021 Poster: Associating Objects with Transformers for Video Object Segmentation
  Zongxin Yang · Yunchao Wei · Yi Yang
- 2020 Poster: Consistent Structural Relation Learning for Zero-Shot Segmentation
  Peike Li · Yunchao Wei · Yi Yang
- 2020 Spotlight: Consistent Structural Relation Learning for Zero-Shot Segmentation
  Peike Li · Yunchao Wei · Yi Yang
- 2020 Poster: Pixel-Level Cycle Association: A New Perspective for Domain Adaptive Semantic Segmentation
  Guoliang Kang · Yunchao Wei · Yi Yang · Yueting Zhuang · Alexander Hauptmann
- 2020 Poster: Auto-Panoptic: Cooperative Multi-Component Architecture Search for Panoptic Segmentation
  Yangxin Wu · Gengwei Zhang · Hang Xu · Xiaodan Liang · Liang Lin
- 2020 Poster: Deep Reinforcement Learning with Stacked Hierarchical Attention for Text-based Games
  Yunqiu Xu · Meng Fang · Ling Chen · Yali Du · Joey Tianyi Zhou · Chengqi Zhang
- 2020 Oral: Pixel-Level Cycle Association: A New Perspective for Domain Adaptive Semantic Segmentation
  Guoliang Kang · Yunchao Wei · Yi Yang · Yueting Zhuang · Alexander Hauptmann
- 2020 Poster: CoADNet: Collaborative Aggregation-and-Distribution Networks for Co-Salient Object Detection
  Qijian Zhang · Runmin Cong · Junhui Hou · Chongyi Li · Yao Zhao
- 2019 Poster: Scalable Deep Generative Relational Model with High-Order Node Dependence
  Xuhui Fan · Bin Li · Caoyuan Li · Scott Sisson · Ling Chen
- 2018 Poster: Self-Erasing Network for Integral Object Attention
  Qibin Hou · PengTao Jiang · Yunchao Wei · Ming-Ming Cheng