Spotlight
Visual Concepts Tokenization
Tao Yang · Yuwang Wang · Yan Lu · Nanning Zheng

Thu Dec 08 09:00 AM -- 11:00 AM (PST)

Obtaining the human-like ability to abstract visual concepts from concrete pixels has long been a fundamental goal in machine learning research fields such as disentangled representation learning and scene decomposition. Towards this goal, we propose an unsupervised, transformer-based Visual Concepts Tokenization framework, dubbed VCT, that parses an image into a set of disentangled visual concept tokens, with each concept token corresponding to one type of independent visual concept. In particular, to obtain these concept tokens, we use only cross-attention to extract visual information from the image tokens layer by layer, without self-attention between concept tokens, preventing information leakage across them. We further propose a Concept Disentangling Loss that encourages different concept tokens to represent independent visual concepts. The cross-attention and the disentangling loss play the roles of induction and mutual exclusion for the concept tokens, respectively. Extensive experiments on several popular datasets verify the effectiveness of VCT on the tasks of disentangled representation learning and scene decomposition, where VCT achieves state-of-the-art results by a large margin.
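
The sketch below illustrates the cross-attention-only tokenization idea described in the abstract: a set of learned concept queries that attend to image tokens layer by layer, with no self-attention among the concept tokens. It is a minimal PyTorch sketch under assumed shapes and hyperparameters, not the authors' implementation; the module and parameter names (ConceptTokenizer, num_concepts, depth, etc.) are hypothetical, and the Concept Disentangling Loss is omitted because the abstract does not specify its form.

```python
# Minimal sketch of a cross-attention-only concept tokenizer (assumed design,
# not the official VCT code). Concept tokens query image tokens; there is no
# self-attention between concept tokens, so information cannot leak across them.
import torch
import torch.nn as nn


class ConceptTokenizerLayer(nn.Module):
    """One layer: concept tokens extract information from image tokens via cross-attention."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, concept_tokens, image_tokens):
        # Queries come only from concept tokens; keys/values come from image tokens.
        attended, _ = self.cross_attn(self.norm1(concept_tokens), image_tokens, image_tokens)
        concept_tokens = concept_tokens + attended
        concept_tokens = concept_tokens + self.ffn(self.norm2(concept_tokens))
        return concept_tokens


class ConceptTokenizer(nn.Module):
    """Stack of cross-attention layers producing a set of disentangled concept tokens."""

    def __init__(self, dim: int = 256, num_concepts: int = 10, depth: int = 4):
        super().__init__()
        self.concept_queries = nn.Parameter(torch.randn(num_concepts, dim))
        self.layers = nn.ModuleList([ConceptTokenizerLayer(dim) for _ in range(depth)])

    def forward(self, image_tokens):
        b = image_tokens.size(0)
        tokens = self.concept_queries.unsqueeze(0).expand(b, -1, -1)
        for layer in self.layers:
            tokens = layer(tokens, image_tokens)
        return tokens  # shape: (batch, num_concepts, dim)


if __name__ == "__main__":
    image_tokens = torch.randn(2, 64, 256)  # e.g. an 8x8 grid of patch features
    concepts = ConceptTokenizer()(image_tokens)
    print(concepts.shape)  # torch.Size([2, 10, 256])
```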

Author Information

Tao Yang (Xi'an Jiaotong University)
Yuwang Wang (Microsoft)
Yan Lu (Microsoft Research Asia)
Nanning Zheng (Xi'an Jiaotong University)
