Pre-training video transformers on extra large-scale datasets is generally required to achieve premier performance on relatively small datasets. In this paper, we show that video masked autoencoders (VideoMAE) are data-efficient learners for self-supervised video pre-training (SSVP). We are inspired by the recent ImageMAE and propose customized video tube masking with an extremely high ratio. This simple design makes video reconstruction a more challenging and meaningful self-supervision task, thus encouraging the extraction of more effective video representations during pre-training. We obtain three important findings with VideoMAE: (1) An extremely high masking ratio (i.e., 90% to 95%) still yields favorable performance for VideoMAE. The temporally redundant video content enables a higher masking ratio than is feasible for images. (2) VideoMAE achieves impressive results on very small datasets (i.e., around 3k-4k videos) without using any extra data. This is partially ascribed to the challenging task of video reconstruction, which enforces high-level structure learning. (3) VideoMAE shows that data quality is more important than data quantity for SSVP. Domain shift between the pre-training and target datasets is an important factor. Notably, our VideoMAE with the vanilla ViT backbone can achieve 87.4% on Kinetics-400, 75.4% on Something-Something V2, 91.3% on UCF101, and 62.6% on HMDB51, without using any extra data. Code is available at https://github.com/MCG-NJU/VideoMAE.
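The tube masking described above samples one spatial mask and extends it along the temporal axis, so a masked patch stays hidden in every frame and cannot be trivially recovered from a neighboring frame. A minimal NumPy sketch of this idea follows; the function name, shapes, and defaults are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def tube_masking(num_frames, num_patches_per_frame, mask_ratio=0.9, seed=None):
    """Sample one spatial mask and repeat it across all frames ("tube" masking).

    Returns a boolean array of shape (num_frames, num_patches_per_frame),
    where True marks a masked patch.
    """
    rng = np.random.default_rng(seed)
    num_masked = int(mask_ratio * num_patches_per_frame)
    # Randomly choose which spatial patch positions to mask, once...
    spatial_mask = np.zeros(num_patches_per_frame, dtype=bool)
    masked_positions = rng.choice(num_patches_per_frame, size=num_masked, replace=False)
    spatial_mask[masked_positions] = True
    # ...then extend the same mask along time: the same positions are
    # hidden in every frame, preventing copy-from-adjacent-frame shortcuts.
    return np.broadcast_to(spatial_mask, (num_frames, num_patches_per_frame))

# Example: 8 frames of 14x14 = 196 patches, 90% masked.
mask = tube_masking(num_frames=8, num_patches_per_frame=196, mask_ratio=0.9)
```

With a 90% ratio, only about 20 of the 196 patch positions per frame remain visible, which is what makes the reconstruction task challenging enough to drive representation learning.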
Author Information
Zhan Tong (Tencent AI Lab)
Yibing Song (Tencent AI Lab)
Jue Wang (Tencent AI Lab)
Limin Wang (Nanjing University)
2022 Spotlight: VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training
Zhan Tong · Yibing Song · Jue Wang · Limin Wang