Poster
Lambda: Learning Matchable Prior For Entity Alignment with Unlabeled Dangling Cases
Hang Yin · Liyao Xiang · Dong Ding · Yuheng He · Yihan Wu · Pengzhi Chu · Xinbing Wang · Chenghu Zhou
East Exhibit Hall A-C #2709
We investigate the entity alignment (EA) problem with unlabeled dangling cases, meaning that some entities have no counterpart in the other knowledge graph (KG), yet these entities are unlabeled. The problem arises when the source and target graphs are of different scales, and it is much cheaper to label the matchable pairs than the dangling entities. To address this challenge, we propose the framework \textit{Lambda} for dangling detection and entity alignment. Lambda features a GNN-based encoder called KEESA with a spectral contrastive learning loss for EA and a positive-unlabeled learning algorithm called iPULE for dangling detection. Our dangling detection module offers theoretical guarantees of unbiasedness, uniform deviation bounds, and convergence. Experimental results demonstrate that each component contributes to overall performance superior to the baselines, even when the baselines additionally exploit 30\% of the dangling entities as labeled training data.
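The abstract above does not spell out iPULE; as background, the sketch below shows the standard unbiased positive-unlabeled risk estimator that such dangling-detection methods build on, assuming a sigmoid surrogate loss, a binary "matchable vs. dangling" classifier head, and a known (or estimated) class prior `prior` — all hypothetical choices for illustration, not the paper's exact algorithm.

```python
import torch

def unbiased_pu_risk(scores_pos: torch.Tensor,
                     scores_unl: torch.Tensor,
                     prior: float) -> torch.Tensor:
    """Unbiased PU risk estimate (generic sketch, not the paper's iPULE).

    scores_pos : classifier logits for labeled positive (matchable) entities
    scores_unl : classifier logits for unlabeled entities
                 (a mixture of matchable and dangling ones)
    prior      : assumed prior pi = P(matchable) within the unlabeled set
    """
    # Sigmoid surrogate losses for predicting the positive / negative class.
    loss_pos = torch.sigmoid(-scores_pos).mean()        # positives scored as positive
    loss_pos_as_neg = torch.sigmoid(scores_pos).mean()  # positives scored as negative
    loss_unl_as_neg = torch.sigmoid(scores_unl).mean()  # unlabeled scored as negative

    # The negative-class risk is recovered from unlabeled data:
    # R_n^- = R_u^- - pi * R_p^-, which keeps the overall estimate unbiased.
    risk_neg = loss_unl_as_neg - prior * loss_pos_as_neg
    return prior * loss_pos + risk_neg
```

In practice the scores would come from a detection head on the KG encoder's entity embeddings, and the estimator would be minimized jointly with the alignment objective; a non-negative correction on `risk_neg` is a common stabilization when training flexible models.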