We present recurrent transformer networks (RTNs) for obtaining dense correspondences between semantically similar images. Our networks accomplish this through an iterative process of estimating spatial transformations between the input images and using these transformations to generate aligned convolutional activations. By directly estimating the transformations between an image pair, rather than employing spatial transformer networks to independently normalize each individual image, we show that greater accuracy can be achieved. This process is conducted in a recursive manner to refine both the transformation estimates and the feature representations. In addition, a technique is presented for weakly-supervised training of RTNs that is based on a proposed classification loss. With RTNs, state-of-the-art performance is attained on several benchmarks for semantic correspondence.
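The recurrent loop the abstract describes (estimate a transformation between the pair, warp one side's features with it, re-estimate from the aligned features, repeat) can be illustrated with a deliberately simplified sketch. Here the "transformation" is a 1-D integer translation and the "features" are raw signals; `warp`, `estimate_shift`, and `recurrent_align` are hypothetical stand-ins for illustration, not the paper's actual network modules.

```python
import numpy as np

def warp(signal, shift):
    # Shift a 1-D signal by an integer offset, zero-padding the borders.
    out = np.zeros_like(signal)
    n = len(signal)
    if shift >= 0:
        out[shift:] = signal[:n - shift]
    else:
        out[:n + shift] = signal[-shift:]
    return out

def estimate_shift(src, tgt, search=3):
    # Pick the offset in [-search, search] that best aligns src to tgt
    # (correlation score); a stand-in for the transformation estimator.
    best, best_score = 0, -np.inf
    for s in range(-search, search + 1):
        score = float(np.dot(warp(src, s), tgt))
        if score > best_score:
            best, best_score = s, score
    return best

def recurrent_align(src, tgt, iters=4, search=3):
    # RTN-style recursion: estimate a residual transform from the
    # currently *aligned* inputs, accumulate it, and repeat, so a
    # small-range estimator can recover a large total transformation.
    total = 0
    for _ in range(iters):
        residual = estimate_shift(warp(src, total), tgt, search)
        total += residual
        if residual == 0:
            break
    return total
```

The point of the toy example is the recursion: a single pass with `search=3` cannot recover a shift of 7, but three refinement passes over progressively better-aligned inputs can, mirroring how RTNs refine both the transformation estimate and the representations it is computed from.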
Author Information
Seungryong Kim (Yonsei University)
Stephen Lin (Microsoft Research)
Sangryul Jeon (Yonsei University)
Dongbo Min (Ewha Womans University)
Kwanghoon Sohn (Yonsei University)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Spotlight: Recurrent Transformer Networks for Semantic Correspondence
  Wed. Dec 5th 03:20 -- 03:25 PM, Room 220 E
More from the Same Authors
- 2020: Paper 62: Instance-wise Depth and Motion Learning from Monocular Videos
  Seokju Lee · Sunghoon Im · Stephen Lin · In So Kweon
- 2021 Spotlight: Aligning Pretraining for Detection via Object-Level Contrastive Learning
  Fangyun Wei · Yue Gao · Zhirong Wu · Han Hu · Stephen Lin
- 2021 Spotlight: Bootstrap Your Object Detector via Mixed Training
  Mengde Xu · Zheng Zhang · Fangyun Wei · Yutong Lin · Yue Cao · Stephen Lin · Han Hu · Xiang Bai
- 2022 Poster: Could Giant Pre-trained Image Models Extract Universal Representations?
  Yutong Lin · Ze Liu · Zheng Zhang · Han Hu · Nanning Zheng · Stephen Lin · Yue Cao
- 2022 Spotlight: Lightning Talks 2A-3
  David Buterez · Chengan He · Xuan Kan · Yutong Lin · Konstantin Schürholt · Yu Yang · Louis Annabi · Wei Dai · Xiaotian Cheng · Alexandre Pitti · Ze Liu · Jon Paul Janet · Jun Saito · Boris Knyazev · Mathias Quoy · Zheng Zhang · James Zachary · Steven J Kiddle · Xavier Giro-i-Nieto · Chang Liu · Hejie Cui · Zilong Zhang · Hakan Bilen · Damian Borth · Dino Oglic · Holly Rushmeier · Han Hu · Xiangyang Ji · Yi Zhou · Nanning Zheng · Ying Guo · Pietro Liò · Stephen Lin · Carl Yang · Yue Cao
- 2022 Spotlight: Could Giant Pre-trained Image Models Extract Universal Representations?
  Yutong Lin · Ze Liu · Zheng Zhang · Han Hu · Nanning Zheng · Stephen Lin · Yue Cao
- 2022 Poster: Neural Matching Fields: Implicit Representation of Matching Fields for Visual Correspondence
  Sunghwan Hong · Jisu Nam · Seokju Cho · Susung Hong · Sangryul Jeon · Dongbo Min · Seungryong Kim
- 2021 Poster: CATs: Cost Aggregation Transformers for Visual Correspondence
  Seokju Cho · Sunghwan Hong · Sangryul Jeon · Yunsung Lee · Kwanghoon Sohn · Seungryong Kim
- 2021 Poster: The Emergence of Objectness: Learning Zero-shot Segmentation from Videos
  Runtao Liu · Zhirong Wu · Stella Yu · Stephen Lin
- 2021 Poster: Aligning Pretraining for Detection via Object-Level Contrastive Learning
  Fangyun Wei · Yue Gao · Zhirong Wu · Han Hu · Stephen Lin
- 2021 Poster: Bootstrap Your Object Detector via Mixed Training
  Mengde Xu · Zheng Zhang · Fangyun Wei · Yutong Lin · Yue Cao · Stephen Lin · Han Hu · Xiang Bai
- 2020 Poster: RepPoints v2: Verification Meets Regression for Object Detection
  Yihong Chen · Zheng Zhang · Yue Cao · Liwei Wang · Stephen Lin · Han Hu