

Poster

Generalized Laplacian Eigenmaps

Hao Zhu · Piotr Koniusz

Hall J (level 1) #436

Keywords: [ GCL ] [ rank minimization ] [ logdet ] [ Graph Contrastive Learning ] [ Node Embedding ]


Abstract: Graph contrastive learning attracts/disperses node representations for similar/dissimilar node pairs under some notion of similarity. It may be combined with a low-dimensional embedding of nodes to preserve the intrinsic and structural properties of a graph. COLES, a recent graph contrastive method, combines traditional graph embedding and negative sampling into one framework. COLES in fact minimizes the trace difference between the within-class scatter matrix encapsulating the graph connectivity and the total scatter matrix encapsulating negative sampling. In this paper, we propose a more essential framework for graph embedding, called Generalized Laplacian EigeNmaps (GLEN), which learns a graph representation by maximizing the rank difference between the total scatter matrix and the within-class scatter matrix, resulting in a minimum class-separation guarantee. However, maximizing the rank difference is an NP-hard problem. Thus, we replace the trace difference, which corresponds to a difference of nuclear norms, with a difference of LogDet expressions, which we argue is a more accurate surrogate for the NP-hard rank difference than the trace difference. While enjoying a lower computational cost, the difference of LogDet terms is lower-bounded by the Affine-Invariant Riemannian Metric (AIRM) and the Jensen-Bregman LogDet Divergence (JBLD), and upper-bounded by the AIRM scaled by a factor of $\sqrt{m}$. We show that GLEN offers favourable accuracy/scalability compared to state-of-the-art baselines.
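Since the abstract contrasts the trace-difference objective attributed to COLES with GLEN's LogDet-difference surrogate for the rank difference, a minimal numerical sketch may help make the two objectives concrete. The scatter-matrix definitions used here (within-class scatter $S_W = Z^\top L Z$ from the graph Laplacian, total scatter $S_T = Z^\top Z$ standing in for negative sampling) and the `eps` regularizer are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def scatter_matrices(Z, L):
    """Hypothetical scatter matrices for node embeddings Z (n x m).

    S_W = Z^T L Z uses the graph Laplacian L to encode connectivity;
    S_T = Z^T Z stands in for the total scatter from negative sampling.
    """
    return Z.T @ L @ Z, Z.T @ Z

def trace_difference(S_W, S_T):
    # COLES-style surrogate: trace difference, i.e. a difference of
    # nuclear norms for positive semi-definite matrices.
    return np.trace(S_T) - np.trace(S_W)

def logdet_difference(S_W, S_T, eps=1e-6):
    # GLEN-style surrogate for the NP-hard rank difference: a difference
    # of LogDet terms. eps*I keeps the log-determinants finite when the
    # scatter matrices are rank-deficient.
    m = S_W.shape[0]
    I = np.eye(m)
    _, logdet_t = np.linalg.slogdet(S_T + eps * I)
    _, logdet_w = np.linalg.slogdet(S_W + eps * I)
    return logdet_t - logdet_w

# Toy usage: random embeddings and the unnormalized Laplacian of a path graph.
n, m = 5, 3
rng = np.random.default_rng(0)
Z = rng.standard_normal((n, m))
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # path-graph adjacency
L = np.diag(A.sum(axis=1)) - A                                # unnormalized Laplacian
S_W, S_T = scatter_matrices(Z, L)
print(trace_difference(S_W, S_T), logdet_difference(S_W, S_T))
```

In this sketch, the LogDet difference saturates with the magnitude of the eigenvalues, which is why it can track the rank difference more closely than the trace difference, whose value grows linearly with eigenvalue scale.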
