State-of-the-art approaches for semantic segmentation rely on deep convolutional neural networks trained on fully annotated datasets, which are notoriously expensive to collect in terms of both time and money. To remedy this situation, weakly supervised methods leverage other forms of supervision that require substantially less annotation effort, but they typically struggle to predict precise object boundaries due to the approximate nature of the supervisory signals in those regions. While great progress has been made in improving performance, many of these weakly supervised methods are highly tailored to their own specific settings. This makes it hard to reuse algorithms and to make steady progress. In this paper, we intentionally avoid such practices when tackling weakly supervised semantic segmentation. In particular, we train standard neural networks with a partial cross-entropy loss for the labeled pixels and our proposed Gated CRF loss for the unlabeled pixels. The Gated CRF loss is designed to deliver several important assets: 1) it enables flexibility in the kernel construction to mask out influence from undesired pixel locations; 2) it offloads the learning of contextual relations to the CNN and concentrates on semantic boundaries; 3) it does not rely on high-dimensional filtering and thus has a simple implementation. Throughout the paper we present the advantages of the loss function, analyze several aspects of weakly supervised training, and show that our 'purist' approach achieves state-of-the-art performance for both click-based and scribble-based annotations.
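As a rough illustration only, the sketch below shows one way such a training objective could be assembled in PyTorch: a partial cross-entropy term that ignores unlabeled pixels, and a locally windowed, gated pairwise term on the softmax predictions. The function names (partial_cross_entropy, gated_pairwise_loss), the bilateral Gaussian kernel, the window radius, and the gate (here masking only out-of-image padding and the self-connection) are assumptions made for illustration; this is not the authors' released implementation.

```python
import torch
import torch.nn.functional as F


def partial_cross_entropy(logits, labels, ignore_index=255):
    # Cross-entropy restricted to annotated pixels (clicks / scribbles);
    # unlabeled pixels are marked with ignore_index and contribute nothing.
    return F.cross_entropy(logits, labels, ignore_index=ignore_index)


def gated_pairwise_loss(probs, image, radius=5, sigma_xy=6.0, sigma_rgb=0.1):
    # probs: (B, C, H, W) softmax predictions; image: (B, 3, H, W) in [0, 1].
    B, C, H, W = probs.shape
    k = 2 * radius + 1

    # Pixel features: spatial coordinates plus colour (bilateral-style kernel).
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    coords = torch.stack([ys, xs]).float().to(probs.device)
    coords = coords.unsqueeze(0).expand(B, -1, -1, -1).contiguous()  # (B, 2, H, W)

    def local_windows(x):
        # Gather the k*k neighbourhood of every pixel: (B, C_in, k*k, H*W).
        c = x.shape[1]
        return F.unfold(x, k, padding=radius).view(B, c, k * k, H * W)

    centre_xy = coords.view(B, 2, 1, H * W)
    centre_rgb = image.reshape(B, 3, 1, H * W)
    neigh_xy, neigh_rgb = local_windows(coords), local_windows(image)

    # Gaussian kernel over spatial and colour distance, as in dense-CRF pairwise terms.
    kernel = torch.exp(
        -((neigh_xy - centre_xy) ** 2).sum(1) / (2 * sigma_xy ** 2)
        - ((neigh_rgb - centre_rgb) ** 2).sum(1) / (2 * sigma_rgb ** 2)
    )

    # Gate: zero out padded (out-of-image) neighbours and the self-connection;
    # further masks for undesired pixel locations could be multiplied in here.
    valid = F.unfold(torch.ones(B, 1, H, W, device=probs.device), k,
                     padding=radius).view(B, k * k, H * W)
    self_mask = torch.ones(k * k, device=probs.device)
    self_mask[(k * k) // 2] = 0.0
    kernel = kernel * valid * self_mask.view(1, k * k, 1)

    # Potts-style penalty: neighbours with high kernel weight are pushed
    # towards the same label distribution.
    centre_p = probs.reshape(B, C, 1, H * W)
    neigh_p = local_windows(probs)
    disagreement = 1.0 - (centre_p * neigh_p).sum(1)  # (B, k*k, H*W)
    return (kernel * disagreement).sum() / kernel.sum().clamp(min=1.0)
```

In a hypothetical training step, the two terms would be combined as, e.g., loss = partial_cross_entropy(logits, sparse_labels) + lam * gated_pairwise_loss(logits.softmax(1), image), where the weighting factor lam is again an assumption rather than a value from the paper.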
Author Information
Anton Obukhov (ETH Zurich)
Stamatios Georgoulis (ETH Zurich)
Dengxin Dai (ETH Zurich)
Luc V Gool (Computer Vision Lab, ETH Zurich)