

Training and Evaluating Multimodal Word Embeddings with Large-scale Web Annotated Images

Junhua Mao · Jiajing Xu · Kevin Jing · Alan Yuille

Area 5+6+7+8 #115

Keywords: [ (Application) Natural Language and Text Processing ] [ Large Scale Learning and Big Data ] [ (Application) Computer Vision ] [ Deep Learning or Neural Networks ]


In this paper, we focus on training and evaluating effective word embeddings with both text and visual information. More specifically, we introduce a large-scale dataset with 300 million sentences describing over 40 million images crawled and downloaded from publicly available Pins (i.e., images with sentence descriptions uploaded by users) on Pinterest. This dataset is more than 200 times larger than MS COCO, the standard large-scale image dataset with sentence descriptions. In addition, we construct an evaluation dataset to directly assess the effectiveness of word embeddings in terms of finding semantically similar or related words and phrases. The word/phrase pairs in this evaluation dataset are collected from the click data of millions of users in an image search system, and thus contain rich semantic relationships. Based on these datasets, we propose and compare several Recurrent Neural Network (RNN) based multimodal (text and image) models. Experiments show that our model benefits from incorporating the visual information into the word embeddings, and that a weight-sharing strategy is crucial for learning such multimodal embeddings. The project page is: (the datasets introduced in this work will be gradually released on the project page).
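The weight-sharing idea mentioned above — reusing one word-embedding matrix at both the input (word lookup) and the output (vocabulary scoring) of a language model, so that all training signals update the same parameters — can be illustrated with a minimal sketch. This is a generic NumPy toy, not the authors' implementation; the RNN cell, dimensions, and variable names are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embed_dim = 1000, 64

# Shared embedding matrix: used both to look up input words and,
# transposed, to score output words (weight sharing / tying).
W_embed = rng.normal(scale=0.1, size=(vocab_size, embed_dim))

# A toy RNN cell (hidden size == embed_dim for simplicity).
W_xh = rng.normal(scale=0.1, size=(embed_dim, embed_dim))
W_hh = rng.normal(scale=0.1, size=(embed_dim, embed_dim))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def rnn_step(h, word_id):
    x = W_embed[word_id]            # input lookup uses W_embed
    h = np.tanh(x @ W_xh + h @ W_hh)
    logits = h @ W_embed.T          # output scores reuse the same W_embed
    return h, softmax(logits)

h = np.zeros(embed_dim)
for w in [3, 17, 42]:               # a toy "sentence" of word ids
    h, probs = rnn_step(h, w)

print(probs.shape)                  # distribution over the full vocabulary
```

Because every gradient — whether it originates from the text-prediction loss or from a visual branch attached to the hidden state — flows into the single matrix `W_embed`, the learned embeddings absorb both modalities, which is the intuition behind sharing weights in a multimodal model.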
