Current state-of-the-art vision-and-language models are evaluated on tasks either individually or in a multi-task setting, overlooking the challenges of continual learning (CL), where tasks arrive sequentially. Existing CL benchmarks have facilitated research on task adaptation and mitigating "catastrophic forgetting," but are limited to vision-only and language-only tasks. We present CLiMB, a benchmark for studying multimodal task learning in a CL setting and for systematically evaluating how upstream continual learning enables rapid generalization to new multimodal and unimodal tasks. CLiMB includes implementations of several CL algorithms and a modified Vision-Language Transformer (ViLT) model that can be deployed on both multimodal and unimodal tasks. We find that common CL methods can help mitigate forgetting during multimodal task learning, but do not enable cross-task knowledge transfer. We envision that CLiMB will facilitate research on a new class of CL algorithms for this challenging multimodal setting.
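As an illustration of the "catastrophic forgetting" the abstract refers to (this is a generic sketch of the standard CL forgetting metric, not code from the CLiMB benchmark itself): given a score matrix `acc` where `acc[i][j]` is accuracy on task `j` after training up through task `i`, forgetting on each earlier task is the best accuracy ever achieved on it minus the accuracy retained after the final task.

```python
def forgetting(acc):
    """Per-task forgetting from a lower-triangular-style score matrix.

    acc[i][j]: accuracy on task j measured after training on task i.
    Returns, for each task j except the last, the drop from the best
    accuracy ever achieved on task j to its accuracy after the final task.
    """
    T = len(acc)  # number of tasks seen in sequence
    return [
        max(acc[i][j] for i in range(j, T)) - acc[T - 1][j]
        for j in range(T - 1)
    ]


# Hypothetical scores for a 3-task sequence: accuracy on task 0 decays
# from 0.80 to 0.60 as later tasks are learned, giving 0.20 forgetting.
scores = [
    [0.80, 0.10, 0.05],
    [0.70, 0.75, 0.10],
    [0.60, 0.65, 0.72],
]
print(forgetting(scores))
```

A CL algorithm succeeds at the "mitigating forgetting" part of the benchmark to the extent it drives these values toward zero; cross-task transfer, which the abstract notes common methods fail to provide, would instead show up as higher scores on tasks not yet (or newly) trained.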
Tejas Srinivasan (University of Southern California)
Ting-Yun Chang (University of Southern California)
Leticia Pinto Alva (University of Southern California)
Georgios Chochlakis (University of Southern California)
Mohammad Rostami (University of Pennsylvania)
Jesse Thomason (University of Southern California)