Transformers have become one of the dominant architectures in the field of computer vision. However, several challenges remain when applying such architectures to video data. Most notably, these models struggle to model the temporal patterns of video data effectively. Directly targeting this issue, we introduce PatchBlender, a learnable blending function that operates over patch embeddings across the temporal dimension of the latent space. We show that our method is successful at enabling vision transformers to encode the temporal component of video data. On Something-Something v2 and MOVi-A, we show that our method improves the performance of a ViT-B. PatchBlender has the advantage of being compatible with almost any Transformer architecture, and since it is learnable, the model can adaptively turn the prior on or off. It is also extremely lightweight compute-wise, adding only 0.005% of the GFLOPs of a ViT-B.
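The abstract describes PatchBlender as a learnable function that blends patch embeddings across the temporal dimension. A minimal sketch of one plausible reading, assuming the blend is a learnable frame-by-frame mixing matrix applied per patch position (the class name, the softmax row normalization, and the near-identity initialization are all illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class PatchBlender:
    """Hypothetical sketch: learnable temporal blending of patch embeddings."""

    def __init__(self, num_frames, rng=None):
        rng = rng or np.random.default_rng(0)
        # Learnable T x T mixing weights, initialized near the identity so
        # training can start close to "no blending" and adapt from there.
        self.W = np.eye(num_frames) + 0.01 * rng.standard_normal(
            (num_frames, num_frames)
        )

    def __call__(self, x):
        # x: (T, N, D) = (frames, patches per frame, embedding dim).
        A = softmax(self.W, axis=-1)           # row-stochastic (T, T) blend
        # Each output frame is a weighted mix of all frames, per patch slot.
        return np.einsum("ts,snd->tnd", A, x)

blender = PatchBlender(num_frames=4)
x = np.random.default_rng(1).standard_normal((4, 16, 32))
y = blender(x)
print(y.shape)  # (4, 16, 32)
```

Because the blending matrix is tiny relative to the attention and MLP blocks of a ViT-B, the negligible compute overhead quoted in the abstract is plausible under this reading: the operation is a single T x T matrix multiply over the temporal axis.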
Author Information
Gabriele Prato (Mila, Université de Montréal)
Yale Song (Facebook AI Research)
Janarthanan Rajendran (Mila)
R Devon Hjelm (Microsoft Research)
Neel Joshi (Microsoft Research)
Sarath Chandar