Poster
Learning Debiased and Disentangled Representations for Semantic Segmentation
Sanghyeok Chu · Dongwan Kim · Bohyung Han

Wed Dec 08 12:30 AM -- 02:00 AM (PST)

Deep neural networks are susceptible to learning biased models with entangled feature representations, which may lead to subpar performance on various downstream tasks. This is particularly true for under-represented classes, where a lack of diversity in the data exacerbates the tendency. This limitation has been addressed mostly in classification tasks, but there is little study of the additional challenges that arise in more complex dense prediction problems such as semantic segmentation. To this end, we propose a model-agnostic and stochastic training scheme for semantic segmentation, which facilitates the learning of debiased and disentangled representations. For each class, we first extract class-specific information from the highly entangled feature map. Then, information related to a randomly sampled class is suppressed by a feature selection process in the feature space. By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes, and the model is able to learn more debiased and disentangled feature representations. Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks, with especially notable performance gains on under-represented classes.
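The core idea described above (randomly suppressing one class's information in the feature map at each training iteration) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function `drop_random_class`, the use of soft class-assignment maps as the "class-specific information", and the multiplicative suppression mask are all assumptions made for the sake of a concrete example.

```python
import numpy as np

def drop_random_class(features, class_maps, rng):
    """Suppress one randomly sampled class's spatial support in a feature map.

    features:   (C, H, W) array, the entangled feature map
    class_maps: (K, H, W) array of soft class-assignment scores in [0, 1]
                (hypothetical stand-in for the paper's class-specific extraction)
    """
    k = rng.integers(class_maps.shape[0])   # sample the class to suppress
    keep = 1.0 - class_maps[k]              # suppression mask: low where class k is active
    return features * keep[None]            # broadcast mask over the channel axis

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))      # toy feature map: 8 channels, 4x4 spatial
maps = rng.random((3, 4, 4))                # toy scores for 3 classes
maps /= maps.sum(axis=0, keepdims=True)     # normalize to a soft class assignment
out = drop_random_class(feats, maps, rng)
print(out.shape)                            # same shape as the input features
```

Because a different class is sampled at every iteration, each class's features are intermittently trained without the others present, which is the mechanism the abstract credits for reducing cross-class feature dependencies.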

Author Information

Sanghyeok Chu (Seoul National University)
Dongwan Kim (Seoul National University)
Bohyung Han (Seoul National University)
