Poster

Deep Fragment Embeddings for Bidirectional Image Sentence Mapping

Andrej Karpathy · Armand Joulin · Li Fei-Fei

Level 2, room 210D

Abstract:

We introduce a model for bidirectional retrieval of images and sentences through a deep, multi-modal embedding of visual and natural language data. Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. We then introduce a structured max-margin objective that allows our model to explicitly associate these fragments across modalities. Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions for the image-sentence retrieval task since the inferred inter-modal alignment of fragments is explicit.
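To make the fragment-level idea concrete, below is a minimal sketch (not the authors' implementation) of how fragment embeddings and a bidirectional max-margin ranking objective could be wired together. The function names, the thresholded-average aggregation of fragment scores, and the margin value are illustrative assumptions; the paper's actual objective combines a fragment alignment term with latent correspondences and a global ranking term.

```python
import torch

def fragment_scores(img_frags, sent_frags):
    # img_frags: (n_img_frags, d) embedded image fragments (e.g., detected objects)
    # sent_frags: (n_sent_frags, d) embedded sentence fragments (e.g., dependency relations)
    # Pairwise inner products give fragment-level compatibility scores.
    return img_frags @ sent_frags.t()

def image_sentence_score(img_frags, sent_frags):
    # Aggregate fragment scores into one image-sentence score.
    # Thresholded average is one plausible choice; the paper aggregates
    # fragment scores through an inferred (latent) fragment alignment.
    return torch.clamp(fragment_scores(img_frags, sent_frags), min=0).mean()

def bidirectional_ranking_loss(scores, margin=1.0):
    # scores: (N, N) matrix with scores[k, l] = score of image k with sentence l;
    # diagonal entries correspond to ground-truth image-sentence pairs.
    # Max-margin ranking in both directions: each correct pair should outscore
    # mismatched sentences (per image) and mismatched images (per sentence).
    N = scores.size(0)
    diag = scores.diag().view(N, 1)
    cost_s = torch.clamp(margin + scores - diag, min=0)      # rank sentences given an image
    cost_i = torch.clamp(margin + scores - diag.t(), min=0)  # rank images given a sentence
    mask = torch.eye(N, dtype=torch.bool, device=scores.device)
    cost_s = cost_s.masked_fill(mask, 0)
    cost_i = cost_i.masked_fill(mask, 0)
    return (cost_s.sum() + cost_i.sum()) / N
```

Because the fragment scores are computed explicitly, the highest-scoring (image fragment, sentence fragment) pairs can be read off directly, which is what makes the retrieval predictions interpretable.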