

Poster

Tackling Uncertain Correspondences for Multi-Modal Entity Alignment

Liyi Chen · Ying Sun · Shengzhe Zhang · Yuyang Ye · Wei Wu · Hui Xiong

East Exhibit Hall A-C #3605
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Multi-modal entity alignment is crucial for integrating multi-modal knowledge graphs originating from different data sources. Existing works mainly focus on fully depicting entity features by designing various modality encoders or fusion approaches. However, uncertain correspondences between inter-modal or intra-modal cues, such as weak inter-modal associations, description diversity, and modality absence, still hinder the effective exploration of aligned entity similarities. To this end, in this paper, we propose a novel Tackling uncertain correspondences method for Multi-modal Entity Alignment (TMEA). Specifically, to handle diverse attribute knowledge descriptions, we design an alignment-augmented abstract representation that incorporates a large language model and in-context learning into attribute alignment and filtering for generating and embedding attribute abstracts. To mitigate the influence of modality absence, we propose to unify all modality features into a shared latent subspace and generate pseudo features via variational autoencoders according to the existing modal features. Then, we develop an inter-modal commonality enhancement mechanism based on cross-attention with orthogonal constraints to address weak semantic associations between modalities. Extensive experiments on two real-world datasets validate the effectiveness of TMEA with a clear improvement over competitive baselines.
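
To make the inter-modal commonality enhancement idea more concrete, the following is a minimal PyTorch sketch of cross-attention between two modality feature sets with an orthogonality penalty. The module name, tensor shapes, and the exact form of the penalty are assumptions for illustration only, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): cross-attention from one modality
# to another, with an orthogonality penalty separating the attended "common"
# part from the modality-specific residual.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossModalCommonality(nn.Module):
    """Cross-attention with an orthogonal constraint between two modalities.

    The attended output is treated as the inter-modal common component; the
    residual (input minus common component) is treated as modality-specific.
    The penalty discourages overlap between the two, which is one plausible
    reading of "cross-attention with orthogonal constraints" in the abstract.
    """

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor):
        # feat_a, feat_b: (batch, seq_len, dim) features of two modalities
        common, _ = self.attn(query=feat_a, key=feat_b, value=feat_b)
        private = feat_a - common  # modality-specific residual
        # Orthogonality penalty: push common and private parts to be decorrelated
        ortho = (F.normalize(common, dim=-1)
                 * F.normalize(private, dim=-1)).sum(-1).pow(2).mean()
        return common, private, ortho


if __name__ == "__main__":
    enc = CrossModalCommonality(dim=128)
    img = torch.randn(8, 1, 128)  # e.g., visual entity features
    txt = torch.randn(8, 1, 128)  # e.g., textual/attribute features
    common, private, ortho_loss = enc(img, txt)
    print(common.shape, ortho_loss.item())
```

In a full pipeline, the orthogonality term would be added to the alignment objective as an auxiliary loss so that the shared and modality-specific components carry complementary information.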
