In this paper, we propose a generalized Unsupervised Manifold Alignment (GUMA) method to build connections between different but correlated datasets without any known correspondences. Based on the assumption that datasets of the same theme usually share similar manifold structures, GUMA is formulated as an explicit integer optimization problem that jointly accounts for structure matching, structure preservation, and the feature comparability of corresponding points in the mutual embedding space. The main benefits of this model include: (1) simultaneous discovery and alignment of manifold structures; (2) fully unsupervised matching without any pre-specified correspondences; (3) efficient iterative alignment without enumerating all possible permutations. Experimental results on dataset matching and real-world applications demonstrate the effectiveness and practicality of our manifold alignment method.
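To make the flavor of the approach concrete, the following is a minimal toy sketch of unsupervised structure-based alignment, not the authors' GUMA implementation: it matches two equally sized point sets (possibly living in different feature spaces) by comparing their intra-set distance matrices, and iterates a linear-assignment step instead of enumerating all permutations. The function name `structural_align` and the simple squared-difference cost are assumptions for illustration; GUMA's actual objective additionally handles structure preservation and feature comparability in a mutual embedding space.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

def structural_align(X, Y, n_iter=20):
    """Toy unsupervised alignment of two point sets of equal size.

    Points are matched by comparing intra-set distance matrices (a
    crude proxy for 'manifold structure'), so X and Y may have
    different feature dimensions. Returns perm such that X[i] is
    matched to Y[perm[i]].
    """
    Dx = cdist(X, X)          # n x n structure of X
    Dy = cdist(Y, Y)          # n x n structure of Y
    n = Dx.shape[0]
    perm = np.arange(n)       # start from an arbitrary correspondence
    for _ in range(n_iter):
        # cost[i, j] = structural disagreement of matching x_i to y_j,
        # holding the current correspondence of the other points fixed:
        # sum_k (Dx[i, k] - Dy[j, perm[k]])^2
        cost = cdist(Dx, Dy[:, perm], metric="sqeuclidean")
        _, new_perm = linear_sum_assignment(cost)
        if np.array_equal(new_perm, perm):
            break             # fixed point reached
        perm = new_perm
    return perm
```

Each iteration solves one Hungarian assignment (polynomial time) rather than searching the factorial space of permutations, which is the spirit of benefit (3) above; the real method's alternating optimization is, of course, more involved.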
Author Information
Zhen Cui (Vision and Machine Learning Lab)
Hong Chang (Chinese Academy of Sciences)
Shiguang Shan (Chinese Academy of Sciences)
Xilin Chen (Institute of Computing Technology, Chinese Academy of Sciences)
More from the Same Authors
- 2023 Poster: Understanding Few-Shot Learning: Measuring Task Relatedness and Adaptation Difficulty via Attributes
  Minyang Hu · Hong Chang · Zong Guo · Bingpeng MA · Shiguang Shan · Xilin Chen
- 2023 Poster: Glance and Focus: Memory Prompting for Multi-Event Video Question Answering
  Ziyi Bai · Ruiping Wang · Xilin Chen
- 2023 Poster: Generalized Semi-Supervised Learning via Self-Supervised Feature Adaptation
  Jiachen Liang · RuiBing Hou · Hong Chang · Bingpeng MA · Shiguang Shan · Xilin Chen
- 2022 Poster: Optimal Positive Generation via Latent Transformation for Contrastive Learning
  Hong Chang · Hong Chang · Bingpeng MA · Shiguang Shan · Xilin Chen
- 2022 Spotlight: Optimal Positive Generation via Latent Transformation for Contrastive Learning
  Hong Chang · Hong Chang · Bingpeng MA · Shiguang Shan · Xilin Chen
- 2022 Spotlight: Lightning Talks 3B-4
  Guanghu Yuan · Yijing Liu · Li Yang · Yongri Piao · Zekang Zhang · Yaxin Xiao · Lin Chen · Hong Chang · Fajie Yuan · Guangyu Gao · Hong Chang · Qinxian Liu · Zhixiang Wei · Qingqing Ye · Chenyang Lu · Jian Meng · Haibo Hu · Xin Jin · Yudong Li · Miao Zhang · Zhiyuan Fang · Jae-sun Seo · Bingpeng MA · Jian-Wei Zhang · Shiguang Shan · Haozhe Feng · Huaian Chen · Deliang Fan · Huadi Zheng · Jianbo Jiao · Huchuan Lu · Beibei Kong · Miao Zheng · Chengfang Fang · Shujie Li · Zhongwei Wang · Yunchao Wei · Xilin Chen · Jie Shi · Kai Chen · Zihan Zhou · Lei Chen · Yi Jin · Wei Chen · Min Yang · Chenyun YU · Bo Hu · Zang Li · Yu Xu · Xiaohu Qie
- 2021 Poster: HRFormer: High-Resolution Vision Transformer for Dense Prediction
  YUHUI YUAN · Rao Fu · Lang Huang · Weihong Lin · Chao Zhang · Xilin Chen · Jingdong Wang
- 2019 Poster: Cross Attention Network for Few-shot Classification
  Ruibing Hou · Hong Chang · Bingpeng MA · Shiguang Shan · Xilin Chen
- 2019 Poster: Multi-label Co-regularization for Semi-supervised Facial Action Unit Recognition
  Xuesong Niu · Hu Han · Shiguang Shan · Xilin Chen
- 2014 Poster: Self-Paced Learning with Diversity
  Lu Jiang · Deyu Meng · Shoou-I Yu · Zhenzhong Lan · Shiguang Shan · Alexander Hauptmann