

Oral Presentation in Workshop: Vision Transformers: Theory and Applications

End-to-end Multimodal Representation Learning for Video Dialog

Huda Alamri · Apoorva Beedu · Irfan Essa · Anthony Bilic · Michael Hu


Abstract:

The video-based dialog task is a challenging multimodal learning task that has received increasing attention in recent years, with state-of-the-art models setting new performance records. This progress is largely driven by the adoption of more powerful transformer-based language encoders. Despite this progress, existing approaches do not effectively utilize visual features to solve the task: recent studies show that state-of-the-art models are biased towards textual information rather than visual cues. To better leverage the available visual information, this study proposes a new framework that combines a 3D-CNN network and transformer-based networks into a single visual encoder to extract more robust semantic representations from videos. The visual encoder is jointly trained end-to-end with the other input modalities, such as text and audio. Experiments on the AVSD task show significant improvements over baselines in both generative and retrieval settings.
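Since this page carries only the abstract, the following is a minimal PyTorch sketch of the kind of visual encoder it describes: a 3D-CNN backbone whose per-clip features are contextualized by a transformer encoder, with the whole stack trainable end-to-end. The backbone choice (torchvision's r3d_18), the feature dimensions, and the clip-level pooling are illustrative assumptions, not the authors' actual configuration.

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18  # assumed 3D-CNN backbone, for illustration only


class VisualEncoder(nn.Module):
    """Sketch of a combined 3D-CNN + transformer visual encoder.

    Hyperparameters and the backbone are assumptions; the abstract only
    specifies that a 3D-CNN and transformer networks form one encoder.
    """

    def __init__(self, d_model: int = 512, nhead: int = 8, num_layers: int = 2):
        super().__init__()
        cnn = r3d_18(weights=None)
        # Drop the classification head; keep convolutional stages + average pool.
        self.cnn = nn.Sequential(*list(cnn.children())[:-1])
        self.proj = nn.Linear(512, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, num_clips, 3, T, H, W) -- video split into short clips
        b, n = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).flatten(1)  # (b*n, 512) per-clip features
        feats = self.proj(feats).view(b, n, -1)           # (b, n, d_model)
        # Transformer contextualizes clip features across the whole video;
        # gradients flow back into the 3D-CNN, so training is end-to-end.
        return self.transformer(feats)


# Usage: 2 videos, each as 4 clips of 16 RGB frames at 112x112.
encoder = VisualEncoder()
video_features = encoder(torch.randn(2, 4, 3, 16, 112, 112))  # (2, 4, 512)
```

In the full system these video features would be fused with the text and audio encodings (e.g., via cross-attention in a shared decoder) before generating or ranking dialog responses; that fusion step is not detailed in the abstract.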
