Poster
ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks
Jiasen Lu · Dhruv Batra · Devi Parikh · Stefan Lee

Wed Dec 11 05:00 PM -- 07:00 PM (PST) @ East Exhibition Hall B + C #119

We present ViLBERT (short for Vision-and-Language BERT), a model for learning task-agnostic joint representations of image content and natural language. We extend the popular BERT architecture to a multi-modal two-stream model, processing both visual and textual inputs in separate streams that interact through co-attentional transformer layers. We pretrain our model through two proxy tasks on the large, automatically collected Conceptual Captions dataset and then transfer it to multiple established vision-and-language tasks -- visual question answering, visual commonsense reasoning, referring expressions, and caption-based image retrieval -- by making only minor additions to the base architecture. We observe significant improvements across tasks compared to existing task-specific models -- achieving state-of-the-art on all four tasks. Our work represents a shift away from learning groundings between vision and language only as part of task training and towards treating visual grounding as a pretrainable and transferable capability.
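To make the two-stream design concrete, below is a minimal, illustrative sketch of a co-attentional transformer block in PyTorch. It is not the authors' released implementation; the class name CoAttentionBlock, the 768-dimensional hidden size, and the 12 attention heads are assumptions chosen for illustration. The key idea from the abstract is that each stream forms queries from its own features and keys/values from the other stream's features, so the visual and textual modalities exchange information.

# Minimal sketch of a co-attentional transformer block (illustrative only,
# not the authors' code). Dimensions and names are assumptions.
import torch
import torch.nn as nn

class CoAttentionBlock(nn.Module):
    def __init__(self, dim=768, num_heads=12):
        super().__init__()
        # Visual stream attends over language features, and vice versa.
        self.vis_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.txt_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.vis_norm = nn.LayerNorm(dim)
        self.txt_norm = nn.LayerNorm(dim)

    def forward(self, vis, txt):
        # Queries come from one stream, keys/values from the other,
        # so information flows in both directions.
        vis_out, _ = self.vis_attn(query=vis, key=txt, value=txt)
        txt_out, _ = self.txt_attn(query=txt, key=vis, value=vis)
        return self.vis_norm(vis + vis_out), self.txt_norm(txt + txt_out)

# Example: 36 image-region features and 20 token embeddings, both 768-d.
vis = torch.randn(2, 36, 768)
txt = torch.randn(2, 20, 768)
block = CoAttentionBlock()
vis, txt = block(vis, txt)
print(vis.shape, txt.shape)  # torch.Size([2, 36, 768]) torch.Size([2, 20, 768])

In the full model, several such blocks are stacked between standard transformer layers in each stream; this sketch shows only the cross-modal attention step.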

Author Information

Jiasen Lu (Georgia Tech)
Dhruv Batra (Georgia Tech / Facebook AI Research (FAIR))
Devi Parikh (Georgia Tech / Facebook AI Research (FAIR))
Stefan Lee (Oregon State University)
