
Coded Residual Transform for Generalizable Deep Metric Learning
Shichao Kan · Yixiong Liang · Min Li · Yigang Cen · Jianxin Wang · Zhihai He

Thu Dec 08 05:00 PM -- 07:00 PM (PST)

A fundamental challenge in deep metric learning is the generalization capability of the feature embedding network, since the embedding network learned on training classes needs to be evaluated on new test classes. To address this challenge, in this paper, we introduce a new method called coded residual transform (CRT) for deep metric learning to significantly improve its generalization capability. Specifically, we learn a set of diversified prototype features, project the feature map onto each prototype, and then encode its features using their projection residuals weighted by their correlation coefficients with each prototype. The proposed CRT method has the following two unique characteristics. First, it represents and encodes the feature map from a set of complementary perspectives based on projections onto diversified prototypes. Second, unlike existing transformer-based feature representation approaches which encode the original values of features based on global correlation analysis, the proposed coded residual transform encodes the relative differences between the original features and their projected prototypes. Embedding space density and spectral decay analysis show that this multi-perspective projection onto diversified prototypes and coded residual representation are able to achieve significantly improved generalization capability in metric learning. Finally, to further enhance the generalization performance, we propose to enforce consistency between the feature similarity matrices of coded residual transforms with different numbers of projection prototypes and embedding dimensions. Our extensive experimental results and ablation studies demonstrate that the proposed CRT method outperforms state-of-the-art deep metric learning methods by large margins, improving upon the current best method by up to 4.28% on the CUB dataset.
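The residual-encoding idea described in the abstract can be sketched as follows. This is a minimal NumPy illustration of encoding features by their residuals to a set of prototypes, weighted by correlation coefficients; the function name, the cosine-similarity/softmax weighting, and the aggregation step are our assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def coded_residual_transform(features, prototypes):
    """Sketch of the coded residual idea: encode each local feature by its
    residuals to a set of prototypes, weighted by its correlation with
    each prototype. features: (N, d); prototypes: (K, d)."""
    # Correlation (cosine similarity) between features and prototypes.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    corr = f @ p.T                              # (N, K)
    # Soft assignment weights of each feature over the prototypes.
    w = np.exp(corr) / np.exp(corr).sum(axis=1, keepdims=True)
    # Residuals between each feature and each prototype: (N, K, d).
    resid = features[:, None, :] - prototypes[None, :, :]
    # Weighted residual encoding, aggregated over features: (K, d).
    encoded = (w[:, :, None] * resid).sum(axis=0)
    return encoded.reshape(-1)                  # one embedding vector

rng = np.random.default_rng(0)
emb = coded_residual_transform(rng.normal(size=(16, 8)),
                               rng.normal(size=(4, 8)))
print(emb.shape)  # (32,)
```

With K prototypes and d-dimensional features, the encoding is K·d-dimensional; in the paper the prototypes are learned jointly with the network, whereas here they are random for illustration only.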

Author Information

Shichao Kan (Central South University)
Yixiong Liang (Central South University)

Yixiong Liang is currently a Professor of Computer Science at Central South University. Between 2011 and 2012, he was a visitor at the Robotics Institute, Carnegie Mellon University. From 2005 to 2007, he was a Postdoctoral Fellow at the Institute of Automation, Chinese Academy of Sciences. He received the Ph.D., M.S., and B.S. degrees from Chongqing University, China, in 2005, 2002, and 1999, respectively. His research interests include computer vision and medical image analysis.

Min Li (Central South University)
Yigang Cen (Beijing Jiaotong University)
Jianxin Wang (Central South University, China)
Zhihai He (Pengcheng Lab, Shenzhen, P. R. China)

More from the Same Authors

  • 2023 Poster: Linear Time Algorithms for k-means with Multi-Swap Local Search »
    Junyu Huang · Qilong Feng · Ziyun Huang · Jinhui Xu · Jianxin Wang
  • 2023 Poster: V-InFoR: A Robust Graph Neural Networks Explainer for Structurally Corrupted Graphs »
    Jun Yin · Senzhang Wang · Chaozhuo Li · Xing Xie · Jianxin Wang
  • 2022 Spotlight: Lightning Talks 6B-4 »
    Junjie Chen · Chuanxia Zheng · JINLONG LI · Yu Shi · Shichao Kan · Yu Wang · Fermín Travi · Ninh Pham · Lei Chai · Guobing Gan · Tung-Long Vuong · Gonzalo Ruarte · Tao Liu · Li Niu · Jingjing Zou · Zequn Jie · Peng Zhang · Ming LI · Yixiong Liang · Guolin Ke · Jianfei Cai · Gaston Bujia · Sunzhu Li · Siyuan Zhou · Jingyang Lin · Xu Wang · Min Li · Zhuoming Chen · Qing Ling · Xiaolin Wei · Xiuqing Lu · Shuxin Zheng · Dinh Phung · Yigang Cen · Jianlou Si · Juan Esteban Kamienkowski · Jianxin Wang · Chen Qian · Lin Ma · Benyou Wang · Yingwei Pan · Tie-Yan Liu · Liqing Zhang · Zhihai He · Ting Yao · Tao Mei