Graph contrastive learning attracts/disperses node representations for similar/dissimilar node pairs under some notion of similarity. It may be combined with a low-dimensional embedding of nodes to preserve intrinsic and structural properties of a graph. In this paper, we extend the celebrated Laplacian Eigenmaps with contrastive learning, and call them COntrastive Laplacian EigenmapS (COLES). Starting from a GAN-inspired contrastive formulation, we show that the Jensen-Shannon divergence underlying many contrastive graph embedding models fails under disjoint positive and negative distributions, which may naturally emerge during sampling in the contrastive setting. In contrast, we demonstrate analytically that COLES essentially minimizes a surrogate of Wasserstein distance, which is known to cope well under disjoint distributions. Moreover, we show that the loss of COLES belongs to the family of so-called block-contrastive losses, previously shown to be superior compared to pair-wise losses typically used by contrastive methods. We show on popular benchmarks/backbones that COLES offers favourable accuracy/scalability compared to DeepWalk, GCN, Graph2Gauss, DGI and GRACE baselines.
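To make the idea concrete, below is a minimal sketch of a contrastive Laplacian Eigenmaps-style embedding in the linear case, assuming a trace-difference objective of the form min_W Tr(W^T X^T (L^+ - L^-) X W) s.t. W^T X^T X W = I, where L^+ is the Laplacian of the observed graph and L^- is built from randomly sampled negative pairs. The function names, negative-sampling scheme and parameters are illustrative assumptions, not the authors' released implementation.

```python
# Sketch (not the authors' code): linear contrastive Laplacian-eigenmaps-style
# embedding. The Laplacian L+ of the observed graph is contrasted with an
# averaged Laplacian L- of randomly sampled "negative" graphs, and the
# trace-difference objective is solved as a generalized eigenproblem.
import numpy as np
import scipy.linalg
import scipy.sparse as sp


def normalized_laplacian(adj: sp.spmatrix) -> sp.spmatrix:
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv_sqrt = np.power(np.maximum(deg, 1e-12), -0.5)
    D = sp.diags(d_inv_sqrt)
    return sp.eye(adj.shape[0]) - D @ adj @ D


def contrastive_le_embedding(X, adj, dim=16, neg_graphs=5, seed=0):
    """Return a low-dimensional embedding Z = X W (linear-case sketch)."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    L_pos = normalized_laplacian(adj)

    # Negative Laplacian: average over a few randomly sampled graphs with a
    # comparable number of edges (random node pairs act as negative pairs).
    m = adj.nnz // 2
    L_neg = sp.csr_matrix((n, n))
    for _ in range(neg_graphs):
        rows = rng.integers(0, n, size=m)
        cols = rng.integers(0, n, size=m)
        A_neg = sp.coo_matrix((np.ones(m), (rows, cols)), shape=(n, n))
        A_neg = ((A_neg + A_neg.T) > 0).astype(float)
        L_neg = L_neg + normalized_laplacian(A_neg) / neg_graphs

    # Generalized eigenproblem: X^T (L+ - L-) X w = lam X^T X w.
    A = X.T @ ((L_pos - L_neg) @ X)
    B = X.T @ X + 1e-6 * np.eye(X.shape[1])
    vals, vecs = scipy.linalg.eigh(A, B)
    W = vecs[:, :dim]  # eigenvectors with the smallest eigenvalues
    return X @ W       # node embeddings


if __name__ == "__main__":
    n, f = 200, 32
    X = np.random.randn(n, f)
    A = sp.random(n, n, density=0.02, format="csr")
    A = ((A + A.T) > 0).astype(float)
    Z = contrastive_le_embedding(X, A)
    print(Z.shape)  # (200, 16)
```

Replacing the linear map X W with the output of a graph encoder (e.g. a GCN) and optimizing the same trace-difference by gradient descent would give the nonlinear variant alluded to in the abstract; the closed-form eigendecomposition above is only one way to instantiate the objective.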
Author Information
Hao Zhu (Australian National University)
Ke Sun (Data61 and Australian National University)
Piotr Koniusz
More from the Same Authors
- 2022 Poster: Generalized Laplacian Eigenmaps (Hao Zhu · Piotr Koniusz)
- 2021 Poster: On the Variance of the Fisher Information for Deep Learning (Alexander Soen · Ke Sun)
- 2018 Poster: Representation Learning of Compositional Data (Marta Avalos · Richard Nock · Cheng Soon Ong · Julien Rouar · Ke Sun)