Poster in Workshop: New Frontiers in Graph Learning (GLFrontiers)

On the Consistency of GNN Explainability Methods

Ehsan Hajiramezanali · Sepideh Maleki · Alex Tseng · Aicha BenTaieb · Gabriele Scalia · Tommaso Biancalani

Keywords: [ Graph Explainability ] [ Consistency ]


Abstract:

Despite the widespread use of post-hoc explanation methods for graph neural networks (GNNs) in high-stakes settings, their quality and reliability have not been comprehensively evaluated. Such evaluation is challenging primarily due to the non-Euclidean nature of graph data, their arbitrary size, and their complex topological structure. In this context, we argue that the consistency of GNN explanations, i.e., the ability to produce similar explanations for input graphs after minor structural changes that do not alter their output predictions, is a key requirement for effective post-hoc GNN explanations. To fill this gap, we introduce a novel metric based on the Fused Gromov-Wasserstein distance to quantify consistency.
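To make the Fused Gromov-Wasserstein (FGW) approach concrete, below is a minimal sketch of an FGW-based comparison between two explained graphs, using the POT library's fused_gromov_wasserstein2. The use of adjacency matrices as structure costs, explanation-score vectors as node features, the alpha trade-off value, and the fgw_consistency helper are all illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch: FGW distance between two explained graphs with POT
# (https://pythonot.github.io/). Lower values indicate more consistent
# explanations under the (assumed) cost choices below.
import numpy as np
import ot


def fgw_consistency(adj1, feat1, adj2, feat2, alpha=0.5):
    """FGW distance between two explained graphs (illustrative helper).

    adj1, adj2 : (n, n) and (m, m) adjacency matrices used as structure costs
                 (an assumption; other intra-graph costs are possible).
    feat1, feat2 : (n, d) and (m, d) node-level explanation scores,
                   e.g., per-node importance from a post-hoc explainer.
    alpha : trade-off between feature and structure costs (assumed value).
    """
    p = ot.unif(adj1.shape[0])  # uniform node distributions
    q = ot.unif(adj2.shape[0])
    # Pairwise feature cost between explanation vectors of the two graphs
    # (squared Euclidean by default).
    M = ot.dist(feat1, feat2)
    # FGW distance: small values mean the explanations are similar both in
    # their importance scores and in how those scores sit on the graph.
    return ot.gromov.fused_gromov_wasserstein2(
        M, adj1, adj2, p, q, loss_fun="square_loss", alpha=alpha
    )


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy 5-node cycle graph and a perturbed copy with one extra edge,
    # mimicking a minor structural change that keeps predictions fixed.
    A = np.array([[0, 1, 0, 0, 1],
                  [1, 0, 1, 0, 0],
                  [0, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1],
                  [1, 0, 0, 1, 0]], dtype=float)
    B = A.copy()
    B[0, 2] = B[2, 0] = 1.0  # the minor perturbation
    X = rng.random((5, 1))   # mock node-level explanation scores
    print("FGW consistency score:", fgw_consistency(A, X, B, X))
```

FGW is a natural fit here because it jointly compares node features (the explanation scores) and graph structure, so it remains well defined even when the two graphs differ in size or topology.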
