

Oral Poster

RG-SAN: Rule-Guided Spatial Awareness Network for End-to-End 3D Referring Expression Segmentation

Changli Wu · Qi Chen · Jiayi Ji · Haowei Wang · Yiwei Ma · You Huang · Gen Luo · Hao Fei · Xiaoshuai Sun · Rongrong Ji

East Exhibit Hall A-C #3208
Wed 11 Dec 11 a.m. PST — 2 p.m. PST
 
Oral presentation: Oral Session 1B: Human-AI Interaction
Wed 11 Dec 10 a.m. PST — 11 a.m. PST

Abstract:

3D Referring Expression Segmentation (3D-RES) aims to segment 3D objects by correlating referring expressions with point clouds. Traditional approaches, however, frequently suffer from over-segmentation or mis-segmentation because they place insufficient emphasis on the spatial information of instances. In this paper, we introduce the Rule-Guided Spatial Awareness Network (RG-SAN), which uses only the spatial information of the target instance for supervision. This enables the network to accurately model the spatial relationships among all entities described in the text, thereby enhancing its reasoning capability. RG-SAN consists of a Text-driven Localization Module (TLM) and a Rule-guided Weak Supervision (RWS) strategy. The TLM first localizes all mentioned instances and then iteratively refines their positional information. The RWS strategy, recognizing that only the target object carries supervised positional information, employs dependency tree rules to precisely guide the positioning of the core instance. Extensive experiments on the ScanRefer benchmark show that RG-SAN not only sets a new state of the art, improving mIoU by 5.1 points, but is also significantly more robust when processing descriptions with spatial ambiguity. All code is available at https://github.com/sosppxo/RG-SAN.
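
The RWS strategy hinges on using dependency tree rules to identify the core instance named in the referring expression. As a loose illustration of that idea only (not the paper's implementation; the use of spaCy and the specific rules below are our assumptions), the following sketch parses an expression and extracts the head noun, which would be the instance whose position receives supervision:

# Illustrative sketch: identify the "core instance" of a referring expression
# via its dependency tree, in the spirit of RG-SAN's RWS strategy.
# Assumes spaCy with the en_core_web_sm model installed; hypothetical helper,
# not code from the RG-SAN repository.
import spacy

nlp = spacy.load("en_core_web_sm")

def core_instance(expression: str) -> str:
    """Return the head noun of a referring expression using its dependency parse."""
    doc = nlp(expression)
    for token in doc:
        if token.dep_ == "ROOT":
            # For a noun-phrase description ("the black chair next to the
            # window"), the syntactic root is typically the referred object.
            if token.pos_ in ("NOUN", "PROPN"):
                return token.text
            # If the root is a verb ("the lamp sits on the desk"), fall back
            # to its nominal subject as the core instance.
            subjects = [c for c in token.children
                        if c.dep_ in ("nsubj", "nsubjpass")]
            if subjects:
                return subjects[0].text
    return doc[-1].text  # last-resort fallback

print(core_instance("the black chair next to the window"))  # -> "chair"

In RG-SAN, rules of this kind serve to route the target's positional supervision to the correct (core) noun among all entities mentioned in the text, so that non-target entities are positioned only through their relations to it.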
