
TVLT: Textless Vision-Language Transformer
Zineng Tang · Jaemin Cho · Yixin Nie · Mohit Bansal

Thu Dec 01 09:00 AM -- 11:00 AM (PST) @ Hall J #214

In this work, we present the Textless Vision-Language Transformer (TVLT), where homogeneous transformer blocks take raw visual and audio inputs for vision-and-language representation learning with minimal modality-specific design, and do not use text-specific modules such as tokenization or automatic speech recognition (ASR). TVLT is trained with masked autoencoding (reconstructing masked patches of continuous video frames and audio spectrograms) and contrastive modeling to align video and audio. TVLT attains performance comparable to its text-based counterpart on various multimodal tasks, such as visual question answering, image retrieval, video retrieval, and multimodal sentiment analysis, with 28x faster inference speed and only 1/3 of the parameters. Our findings suggest the possibility of learning compact and efficient visual-linguistic representations from low-level visual and audio signals without assuming the prior existence of text. Our code and checkpoints are available at: https://github.com/zinengtang/TVLT
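To make the two pre-training objectives concrete, below is a minimal PyTorch sketch of joint masked autoencoding over video-frame and audio-spectrogram patches plus a video-audio contrastive loss, as described in the abstract. All module names, patch sizes, and hyperparameters here are illustrative placeholders, not the authors' released implementation; see the linked repository for the actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyTVLTSketch(nn.Module):
    """Illustrative sketch: shared transformer over video + audio patch tokens,
    trained with masked reconstruction and a video-audio contrastive loss.
    Patch sizes and dimensions below are placeholders, not the paper's values."""

    def __init__(self, dim=256, n_layers=4, n_heads=4):
        super().__init__()
        # Homogeneous transformer blocks shared across both modalities.
        layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, n_heads, batch_first=True), 1)
        # Linear patch embeddings: 16x16x3 video patches, 16x16 spectrogram patches.
        self.video_embed = nn.Linear(16 * 16 * 3, dim)
        self.audio_embed = nn.Linear(16 * 16, dim)
        self.video_head = nn.Linear(dim, 16 * 16 * 3)  # reconstruct video patches
        self.audio_head = nn.Linear(dim, 16 * 16)      # reconstruct audio patches
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))

    def forward(self, video_patches, audio_patches, mask_ratio=0.75):
        # video_patches: (B, Nv, 768), audio_patches: (B, Na, 256)
        v = self.video_embed(video_patches)
        a = self.audio_embed(audio_patches)
        tokens = torch.cat([v, a], dim=1)
        B, N, D = tokens.shape
        Nv = v.shape[1]

        # --- Masked autoencoding: replace a random subset of tokens with a
        # learned mask token, then reconstruct the original patches.
        # (Simplified: a real MAE-style encoder would drop masked tokens.)
        mask = (torch.rand(B, N, device=tokens.device) < mask_ratio).float()
        masked = torch.where(mask.bool().unsqueeze(-1),
                             self.mask_token.expand(B, N, D), tokens)
        dec = self.decoder(self.encoder(masked))
        recon_v = self.video_head(dec[:, :Nv])
        recon_a = self.audio_head(dec[:, Nv:])
        loss_v = (F.mse_loss(recon_v, video_patches, reduction="none").mean(-1)
                  * mask[:, :Nv]).sum() / mask[:, :Nv].sum().clamp(min=1)
        loss_a = (F.mse_loss(recon_a, audio_patches, reduction="none").mean(-1)
                  * mask[:, Nv:]).sum() / mask[:, Nv:].sum().clamp(min=1)

        # --- Contrastive modeling: align pooled video and audio embeddings
        # within the batch (symmetric InfoNCE).
        v_emb = F.normalize(self.encoder(v).mean(1), dim=-1)
        a_emb = F.normalize(self.encoder(a).mean(1), dim=-1)
        logits = v_emb @ a_emb.t() / 0.07
        targets = torch.arange(B, device=logits.device)
        loss_contrast = (F.cross_entropy(logits, targets) +
                         F.cross_entropy(logits.t(), targets)) / 2

        return loss_v + loss_a + loss_contrast


if __name__ == "__main__":
    model = TinyTVLTSketch()
    video = torch.randn(2, 196, 16 * 16 * 3)   # dummy video-frame patches
    audio = torch.randn(2, 128, 16 * 16)       # dummy spectrogram patches
    print(model(video, audio))                 # combined pre-training loss
```

Note that this sketch encodes all tokens (masked and unmasked) for simplicity; the point is only to show how reconstruction and contrastive terms can share one homogeneous transformer without any text-specific modules.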

Author Information

Zineng Tang (University of North Carolina, Chapel Hill)
Jaemin Cho (University of North Carolina, Chapel Hill)
Yixin Nie (Meta AI)
Mohit Bansal (UNC Chapel Hill)
