Zero-shot semantic segmentation aims to recognize the semantics of pixels from unseen categories with zero training samples. Previous practice [1] trains classifiers for unseen categories using visual features generated from semantic word embeddings. However, the generator is learned only on the seen categories, with no constraint applied to the unseen ones, leading to poor generalization. In this work, we propose a Consistent Structural Relation Learning (CSRL) approach that constrains the generation of unseen visual features by exploiting the structural relations between seen and unseen categories. We observe that different categories usually exhibit similar relations in both the semantic word embedding space and the visual feature space. This observation motivates us to harness the similarity of category-level relations in the semantic word embedding space to learn a better visual feature generator. Concretely, by exploring pair-wise and list-wise structures, we constrain the relations among generated visual features to be consistent with their counterparts in the semantic word embedding space. In this way, the relations between seen and unseen categories are transferred to implicitly constrain the generator to produce relation-consistent unseen visual features. We conduct extensive experiments on the Pascal-VOC and Pascal-Context benchmarks. The proposed CSRL outperforms existing state-of-the-art methods by a large margin, yielding gains of ~7-12% on Pascal-VOC and ~2-5% on Pascal-Context.
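The pair-wise relation-consistency idea in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes cosine similarity as the pair-wise relation measure and a mean-squared penalty between the two relation matrices; all function names are illustrative.

```python
import numpy as np

def pairwise_relation(x):
    # Cosine-similarity matrix: the relation between every pair of categories.
    # (Assumption: cosine similarity is used as the relation measure.)
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    return x @ x.T

def relation_consistency_loss(word_embeddings, generated_features):
    # Penalize discrepancy between the relation structure of the semantic
    # word-embedding space and that of the generated visual features.
    r_sem = pairwise_relation(word_embeddings)
    r_vis = pairwise_relation(generated_features)
    return float(np.mean((r_sem - r_vis) ** 2))

# Toy example: 4 categories, word embeddings of dim 5, generated features of dim 8.
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 5))
feat = rng.normal(size=(4, 8))
loss = relation_consistency_loss(emb, feat)
```

Minimizing such a loss during generator training would push the generated features (including those for unseen categories, whose word embeddings are available) toward the category-level relation structure of the embedding space.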
Author Information
Peike Li (University of Technology Sydney)
Yunchao Wei (UTS)
Yi Yang (UTS)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Poster: Consistent Structural Relation Learning for Zero-Shot Segmentation »
  Thu. Dec 10th 05:00 -- 07:00 AM, Poster Session 4 #1147
More from the Same Authors
- 2022 Spotlight: Mask Matching Transformer for Few-Shot Segmentation »
  Siyu Jiao · Gengwei Zhang · Shant Navasardyan · Ling Chen · Yao Zhao · Yunchao Wei · Humphrey Shi
- 2022 Poster: Mask Matching Transformer for Few-Shot Segmentation »
  Siyu Jiao · Gengwei Zhang · Shant Navasardyan · Ling Chen · Yao Zhao · Yunchao Wei · Humphrey Shi
- 2021 Poster: Few-Shot Segmentation via Cycle-Consistent Transformer »
  Gengwei Zhang · Guoliang Kang · Yi Yang · Yunchao Wei
- 2021 Poster: Associating Objects with Transformers for Video Object Segmentation »
  Zongxin Yang · Yunchao Wei · Yi Yang
- 2020 Poster: Adversarial Style Mining for One-Shot Unsupervised Domain Adaptation »
  Yawei Luo · Ping Liu · Tao Guan · Junqing Yu · Yi Yang
- 2020 Poster: Pixel-Level Cycle Association: A New Perspective for Domain Adaptive Semantic Segmentation »
  Guoliang Kang · Yunchao Wei · Yi Yang · Yueting Zhuang · Alexander Hauptmann
- 2020 Oral: Pixel-Level Cycle Association: A New Perspective for Domain Adaptive Semantic Segmentation »
  Guoliang Kang · Yunchao Wei · Yi Yang · Yueting Zhuang · Alexander Hauptmann
- 2019 Poster: Connective Cognition Network for Directional Visual Commonsense Reasoning »
  Aming Wu · Linchao Zhu · Yahong Han · Yi Yang
- 2019 Poster: Network Pruning via Transformable Architecture Search »
  Xuanyi Dong · Yi Yang
- 2018 Poster: Self-Erasing Network for Integral Object Attention »
  Qibin Hou · PengTao Jiang · Yunchao Wei · Ming-Ming Cheng