Space-Time Correspondence as a Contrastive Random Walk
Allan Jabri, Andrew Owens, Alexei Efros
Oral presentation: Orals & Spotlights Track 12: Vision Applications
on 2020-12-08T18:00:00-08:00 - 2020-12-08T18:15:00-08:00
Abstract: This paper proposes a simple self-supervised approach for learning a representation for visual correspondence from raw video. We cast correspondence as prediction of links in a space-time graph constructed from video. In this graph, the nodes are patches sampled from each frame, and nodes adjacent in time can share a directed edge. We learn a representation in which pairwise similarity defines transition probability of a random walk, such that prediction of long-range correspondence is computed as a walk along the graph. We optimize the representation to place high probability along paths of similarity. Targets for learning are formed without supervision, by cycle-consistency: the objective is to maximize the likelihood of returning to the initial node when walking along a graph constructed from a palindrome of frames. Thus, a single path-level constraint implicitly supervises chains of intermediate comparisons. When used as a similarity metric without adaptation, the learned representation outperforms the self-supervised state-of-the-art on label propagation tasks involving objects, semantic parts, and pose. Moreover, we demonstrate that a technique we call edge dropout, as well as self-supervised adaptation at test-time, further improve transfer for object-centric correspondence.
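The abstract's core idea, a random walk whose transition probabilities come from pairwise patch similarity, trained with a cycle-consistency loss on a palindrome of frames, can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's implementation; the function names (`transition`, `crw_loss`), the temperature value, and the embedding shapes are all illustrative assumptions.

```python
# Minimal sketch of the contrastive random walk objective (illustrative,
# not the authors' code). Each frame is represented by an array of
# patch embeddings of shape (num_patches, dim).
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def transition(a, b, temperature=0.07):
    # Pairwise similarity between patches of temporally adjacent frames
    # defines the random walk's transition probabilities (rows sum to 1).
    sim = a @ b.T  # (num_patches_a, num_patches_b)
    return softmax(sim / temperature, axis=1)

def crw_loss(frames, temperature=0.07):
    # Walk along a palindrome of frames: t0 -> t1 -> ... -> t1 -> t0,
    # chaining one transition matrix per step.
    palindrome = frames + frames[-2::-1]
    walk = np.eye(palindrome[0].shape[0])
    for a, b in zip(palindrome[:-1], palindrome[1:]):
        walk = walk @ transition(a, b, temperature)
    # Cycle-consistency target: maximize the probability of returning to
    # the starting node, i.e. the diagonal of the chained walk.
    return -np.log(np.diag(walk) + 1e-8).mean()

# Usage: three frames of 4 unit-normalized patch embeddings each.
rng = np.random.default_rng(0)
frames = [rng.standard_normal((4, 8)) for _ in range(3)]
frames = [f / np.linalg.norm(f, axis=1, keepdims=True) for f in frames]
loss = crw_loss(frames)
```

In a real model the embeddings would come from a learned encoder and the loss would be backpropagated through it; edge dropout, mentioned at the end of the abstract, would randomly zero entries of each transition matrix before renormalizing.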