Poster
in
Workshop: New Frontiers in Graph Learning (GLFrontiers)

On the Adversarial Robustness of Graph Contrastive Learning Methods

Filippo Guerranti · Zinuo Yi · Anna Starovoit · Rafiq Kamel · Simon Geisler · Stephan Günnemann

Keywords: [ robustness evaluation ] [ Graph Contrastive Learning ] [ Adversarial Robustness ]


Abstract: Contrastive learning (CL) has emerged as a powerful framework for learning representations of images and text in a self-supervised manner while enhancing model robustness against adversarial attacks. More recently, researchers have extended the principles of contrastive learning to graph-structured data, giving rise to the field of graph contrastive learning (GCL). However, whether GCL methods can deliver the same advantages in adversarial robustness as their counterparts in the image and text domains remains an open question. In this paper, we introduce a comprehensive $\textit{robustness evaluation protocol}$ tailored to assess the robustness of GCL models. We subject these models to $\textit{adaptive}$ adversarial attacks targeting the graph structure, specifically in the evasion scenario. We evaluate node and graph classification tasks using diverse real-world datasets and attack strategies. With our work, we aim to offer insights into the robustness of GCL methods and hope to open avenues for potential future research directions.
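To make the evasion setting concrete, the sketch below shows a generic greedy structure attack of the kind the abstract alludes to: the model weights stay fixed, and edges of the input graph are flipped within a budget to maximize the loss of a target node. This is an illustrative toy example with a one-layer GCN-style surrogate and NumPy only; the model `propagate`, the greedy search, and all names here are assumptions for illustration, not the paper's actual protocol or attack implementation.

```python
import numpy as np

def propagate(A, X, W):
    # Toy one-layer GCN-style model: softmax(D^{-1}(A + I) X W).
    A_hat = A + np.eye(A.shape[0])
    D_inv = 1.0 / A_hat.sum(axis=1, keepdims=True)
    logits = (D_inv * A_hat) @ X @ W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def node_loss(A, X, W, node, label):
    # Cross-entropy loss of the target node under the fixed model.
    probs = propagate(A, X, W)
    return -np.log(probs[node, label] + 1e-12)

def greedy_evasion_attack(A, X, W, node, label, budget):
    """Evasion attack on the graph structure: weights W are frozen,
    and up to `budget` edge flips are chosen greedily to maximize
    the target node's classification loss."""
    A = A.copy()
    n = A.shape[0]
    for _ in range(budget):
        base = node_loss(A, X, W, node, label)
        best_gain, best_flip = 0.0, None
        for i in range(n):
            for j in range(i + 1, n):
                A[i, j] = A[j, i] = 1 - A[i, j]  # tentatively flip edge (i, j)
                gain = node_loss(A, X, W, node, label) - base
                A[i, j] = A[j, i] = 1 - A[i, j]  # undo the flip
                if gain > best_gain:
                    best_gain, best_flip = gain, (i, j)
        if best_flip is None:  # no flip increases the loss
            break
        i, j = best_flip
        A[i, j] = A[j, i] = 1 - A[i, j]  # commit the best flip
    return A
```

An *adaptive* attack, in contrast to this fixed surrogate, would target each defense's full pipeline (including its augmentations and encoder), which is the stronger setting the paper evaluates.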
