Poster
Interpretable and Globally Optimal Prediction for Textual Grounding using Image Concepts
Raymond A. Yeh · Jinjun Xiong · Wen-Mei Hwu · Minh Do · Alex Schwing

Tue Dec 05 06:30 PM -- 10:30 PM (PST) @ Pacific Ballroom #82

Textual grounding is an important but challenging task for human-computer interaction, robotics, and knowledge mining. Existing algorithms generally formulate the task as selection from a set of bounding box proposals obtained from deep network based systems. In this work, we demonstrate that the problem of textual grounding can be cast into a unified framework that permits efficient search over all possible bounding boxes. Hence, the method is able to consider significantly more proposals and does not rely on a successful first stage that hypothesizes bounding box proposals. Beyond that, we demonstrate that the trained parameters of our model can be used as word embeddings which capture spatial-image relationships and provide interpretability. Lastly, at the time of submission, our approach outperformed the then state-of-the-art methods on the Flickr30k Entities and ReferItGame datasets by 3.08% and 7.77%, respectively.
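To make the "search over all possible bounding boxes" concrete, the following is a minimal Python sketch, not the authors' implementation: a hypothetical per-pixel score map stands in for the word-conditioned image-concept features the model learns, a summed-area table gives constant-time box scores, and a brute-force loop over every box stands in for the paper's efficient global search, which maximizes the same kind of box-sum objective.

import numpy as np

def integral_image(score_map):
    # Summed-area table: ii[r, c] = sum of score_map[:r, :c],
    # built in O(HW) so any box sum costs O(1) afterwards.
    return np.pad(score_map, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def box_score(ii, top, left, bottom, right):
    # Sum of per-pixel scores in rows [top, bottom), cols [left, right).
    return ii[bottom, right] - ii[top, right] - ii[bottom, left] + ii[top, left]

def best_box(score_map):
    # Score every axis-aligned box and return the global maximum.
    # Exhaustive (O(H^2 W^2) boxes) for clarity; an efficient search
    # over the same space would prune rather than enumerate.
    H, W = score_map.shape
    ii = integral_image(score_map)
    best, argbest = -np.inf, None
    for top in range(H):
        for bottom in range(top + 1, H + 1):
            for left in range(W):
                for right in range(left + 1, W + 1):
                    s = box_score(ii, top, left, bottom, right)
                    if s > best:
                        best, argbest = s, (top, left, bottom, right)
    return argbest, best

# Hypothetical per-pixel score map for a query phrase; the slight
# negative bias keeps the optimum from trivially covering the image.
rng = np.random.default_rng(0)
print(best_box(rng.standard_normal((32, 32)) - 0.1))

Because every box is scored, the returned box is globally optimal for the given score map, which is the property the abstract highlights: no first-stage proposal mechanism can discard the correct region before scoring.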

Author Information

Raymond A. Yeh (University of Illinois at Urbana-Champaign)
Jinjun Xiong (IBM Research)
Wen-Mei Hwu (University of Illinois at Urbana-Champaign)
Minh Do (University of Illinois)
Alex Schwing (University of Illinois at Urbana-Champaign)
