

Oral / Poster

Siamese Masked Autoencoders

Agrim Gupta · Jiajun Wu · Jia Deng · Fei-Fei Li

Great Hall & Hall B1+B2 (level 1) #923
Thu 14 Dec 3 p.m. PST — 5 p.m. PST
 
Oral presentation: Oral 6C Vision
Thu 14 Dec 1:20 p.m. PST — 2:20 p.m. PST

Abstract:

Establishing correspondence between images or scenes is a significant challenge in computer vision, especially given occlusions, viewpoint changes, and varying object appearances. In this paper, we present Siamese Masked Autoencoders (SiamMAE), a simple extension of Masked Autoencoders (MAE) for learning visual correspondence from videos. SiamMAE operates on pairs of randomly sampled video frames and asymmetrically masks them. These frames are processed independently by an encoder network, and a decoder composed of a sequence of cross-attention layers is tasked with predicting the missing patches in the future frame. By masking a large fraction (95%) of patches in the future frame while leaving the past frame unchanged, SiamMAE encourages the network to focus on object motion and learn object-centric representations. Despite its conceptual simplicity, features learned via SiamMAE outperform state-of-the-art self-supervised methods on video object segmentation, pose keypoint propagation, and semantic part propagation tasks. SiamMAE achieves competitive results without relying on data augmentation, handcrafted tracking-based pretext tasks, or other techniques to prevent representational collapse.
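To make the mechanism described above concrete, here is a minimal PyTorch sketch of asymmetric masking with a cross-attention decoder. Only the overall structure comes from the abstract: a shared (siamese) encoder processing the two frames independently, 95% of future-frame patches masked while the past frame stays fully visible, and a decoder of cross-attention layers predicting the missing patches. The class name `SiamMAESketch`, layer counts, dimensions, and the linear prediction head are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SiamMAESketch(nn.Module):
    """Illustrative sketch of SiamMAE's asymmetric masking (not the paper's code)."""

    def __init__(self, dim=256, mask_ratio=0.95, decoder_depth=2):
        super().__init__()
        self.mask_ratio = mask_ratio  # abstract: 95% of future-frame patches masked
        # Shared (siamese) encoder; depth and width here are assumptions.
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True),
            num_layers=4,
        )
        # Decoder: cross-attention layers whose future-frame queries attend
        # to the fully visible past-frame tokens.
        self.cross_attn = nn.ModuleList(
            [nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
             for _ in range(decoder_depth)]
        )
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.head = nn.Linear(dim, dim)  # stand-in for the patch-prediction head

    def forward(self, past_tokens, future_tokens):
        # Inputs: (B, N, dim) patch embeddings (positional info assumed included).
        B, N, D = future_tokens.shape
        n_keep = max(1, int(N * (1 - self.mask_ratio)))  # keep ~5% of patches
        keep = torch.rand(B, N, device=future_tokens.device).argsort(1)[:, :n_keep]
        visible = torch.gather(future_tokens, 1,
                               keep.unsqueeze(-1).expand(-1, -1, D))
        # Each frame is processed independently by the same encoder.
        ctx = self.encoder(past_tokens)  # past frame: no masking
        vis = self.encoder(visible)      # future frame: sparse visible tokens
        # Queries: encoded visible tokens plus learned mask tokens.
        queries = torch.cat([vis, self.mask_token.expand(B, N - n_keep, D)], 1)
        for attn in self.cross_attn:
            out, _ = attn(queries, ctx, ctx)  # cross-attend to the past frame
            queries = queries + out           # residual connection
        return self.head(queries)             # predictions for all N patches


# Smoke test with random patch embeddings.
model = SiamMAESketch()
past = torch.randn(2, 196, 256)    # past frame, fully visible
future = torch.randn(2, 196, 256)  # future frame, to be 95% masked
print(model(past, future).shape)   # torch.Size([2, 196, 256])
```

Under this reading of the abstract, the heavy masking leaves the network little future-frame appearance to copy, so the cross-attention layers must pull matching content from the past frame, which is what pushes the encoder toward motion-sensitive, object-centric features.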
