
Spatial-Aware Feature Aggregation for Image based Cross-View Geo-Localization
Yujiao Shi · Liu Liu · Xin Yu · Hongdong Li

Thu Dec 12 05:00 PM -- 07:00 PM (PST) @ East Exhibition Hall B + C #100

In this paper, we develop a new deep network to explicitly address the inherent differences between ground and aerial views. We observe that there exist approximate domain correspondences between ground and aerial images. Specifically, pixels lying along the same azimuth direction in an aerial image approximately correspond to a vertical image column in the ground-view image. Thus, we propose a two-step approach to exploit this prior knowledge. The first step applies a regular polar transform to warp an aerial image so that its domain is closer to that of a ground-view panorama. Note that the polar transform, as a pure geometric transformation, is agnostic to scene content and hence cannot bring the two domains into full alignment. We therefore add a subsequent spatial-attention mechanism that brings corresponding deep features closer in the embedding space. To improve the robustness of the feature representation, we introduce a feature aggregation strategy that learns multiple spatial embeddings. Through this two-step approach, we achieve more discriminative deep representations, facilitating more accurate cross-view geo-localization. Our experiments on standard benchmark datasets show significant performance gains, more than doubling the recall rate compared with the previous state of the art.
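The polar-transform step in the abstract can be sketched in code. The idea is that each column of the output panorama corresponds to one azimuth direction in the aerial image, and rows sweep radially from the image center. The sizes, sampling convention (nearest neighbor), and radial mapping below are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

def polar_transform(aerial, height, width):
    """Warp a square aerial image into a panorama-like strip.

    Each output column j picks one azimuth angle; each output row i
    picks one radius, with the bottom row at the aerial-image center.
    Nearest-neighbor sampling for simplicity (an assumption; a real
    implementation would typically use bilinear interpolation).
    """
    size = aerial.shape[0]  # assume a square aerial image
    out = np.zeros((height, width) + aerial.shape[2:], dtype=aerial.dtype)
    for i in range(height):
        for j in range(width):
            # radius shrinks toward zero at the bottom row of the panorama
            radius = (size / 2.0) * (height - 1 - i) / height
            theta = 2.0 * np.pi * j / width  # azimuth for this column
            x = size / 2.0 + radius * np.sin(theta)
            y = size / 2.0 - radius * np.cos(theta)
            xi, yi = int(np.rint(x)), int(np.rint(y))
            if 0 <= xi < size and 0 <= yi < size:
                out[i, j] = aerial[yi, xi]
    return out
```

As the abstract notes, this transform is purely geometric: it aligns the domains only approximately, which is why the learned spatial-attention step is still needed afterwards.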

Author Information

Yujiao Shi (Australian National University)
Liu Liu (Australian National University)
Xin Yu (Australian National University)
Hongdong Li (Australian National University)
