Spotlight Poster

AIMS: All-Inclusive Multi-Level Segmentation for Anything

Lu Qi · Jason Kuen · Weidong Guo · Jiuxiang Gu · Zhe Lin · Bo Du · Yu Xu · Ming-Hsuan Yang

Great Hall & Hall B1+B2 (level 1) #2000

Abstract:

Despite recent progress toward accurate visual entity segmentation, meeting the diverse requirements of image editing applications, which call for region-of-interest selection at different levels, remains unsolved. In this paper, we propose a new task, All-Inclusive Multi-Level Segmentation (AIMS), which segments visual regions at three levels: part, entity, and relation (two entities with some semantic relationship). We also build a unified AIMS model through multi-dataset multi-task training to address the two major challenges of annotation inconsistency and task correlation. Specifically, we propose task complementarity, task association, and a prompt mask encoder for three-level prediction. Extensive experiments demonstrate the effectiveness and generalization capacity of our method compared to state-of-the-art methods trained on a single dataset and to concurrent work on segment anything. We will make our code and trained models publicly available.
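
The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of how a unified head that predicts at the part, entity, and relation levels, conditioned by a prompt mask encoder, might be organized. Every module name, dimension, and design choice below is an illustrative assumption, not the authors' architecture.

```python
# Hypothetical sketch of a three-level prediction head: a shared pixel
# decoder, one learnable query set per level, and a prompt mask encoder
# whose embedding conditions all three query sets. Not the AIMS code.
import torch
import torch.nn as nn


class MultiLevelSegHead(nn.Module):
    def __init__(self, dim=256, num_queries=100):
        super().__init__()
        # One learnable query set per prediction level (assumed design).
        self.queries = nn.ParameterDict({
            level: nn.Parameter(torch.randn(num_queries, dim))
            for level in ("part", "entity", "relation")
        })
        # Assumed prompt mask encoder: embeds a binary prompt mask into a
        # single vector that is added to every query.
        self.prompt_encoder = nn.Sequential(
            nn.Conv2d(1, dim, kernel_size=16, stride=16),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=3,
        )

    def forward(self, pixel_feats, prompt_mask=None):
        # pixel_feats: (B, dim, H, W) features from a shared backbone.
        B, C, H, W = pixel_feats.shape
        memory = pixel_feats.flatten(2).transpose(1, 2)  # (B, H*W, dim)
        outputs = {}
        for level, q in self.queries.items():
            q = q.unsqueeze(0).expand(B, -1, -1)  # (B, Q, dim)
            if prompt_mask is not None:
                # Condition every level on the same prompt embedding.
                q = q + self.prompt_encoder(prompt_mask).unsqueeze(1)
            q = self.decoder(q, memory)  # (B, Q, dim)
            # Dot-product mask prediction from queries and pixel features.
            outputs[level] = torch.einsum("bqc,bchw->bqhw", q, pixel_feats)
        return outputs


head = MultiLevelSegHead()
feats = torch.randn(2, 256, 32, 32)
prompt = torch.zeros(2, 1, 512, 512)
masks = head(feats, prompt)  # {"part": ..., "entity": ..., "relation": ...}
```

Sharing one pixel decoder while keeping separate per-level query sets is one plausible way to exploit the task correlation across levels that the abstract mentions; the paper itself should be consulted for the actual design.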
