

Poster

RegExplainer: Generating Explanations for Graph Neural Networks in Regression Tasks

Jiaxing Zhang · Zhuomin Chen · Hao Mei · Longchao Da · Dongsheng Luo · Hua Wei

West Ballroom A-D #7100
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Graph regression is a fundamental task that has gained significant attention in a wide range of graph learning applications. However, its inference process is often not easily interpretable. Current explanation techniques are limited to understanding GNN behaviors in classification tasks, leaving an explanation gap for graph regression models. In this work, we propose a novel explanation method for interpreting graph regression models (XAIG-R). Our method addresses the distribution-shift problem and the continuously ordered decision boundary issue that prevent existing methods from being applied to regression tasks. We introduce a novel objective based on graph information bottleneck (GIB) theory and a new mix-up framework, which can support various GNNs and explainers in a model-agnostic manner. Additionally, we present a self-supervised learning strategy to handle the continuously ordered labels in regression tasks. We evaluate the proposed method on three benchmark datasets and a real-world dataset introduced by us, and extensive experiments demonstrate its effectiveness in interpreting GNN models in regression tasks.
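To make the core idea concrete, below is a minimal, illustrative sketch of a GIB-style explanation objective adapted to regression, not the authors' released implementation. It assumes a hypothetical `model(x, edge_index, edge_weight)` callable that returns a scalar prediction and learnable per-edge mask logits; the names, signature, and loss weighting are assumptions for illustration only.

```python
import torch

def gib_regression_loss(model, x, edge_index, edge_mask_logits,
                        sparsity_weight=0.1):
    """Sketch of an explanation objective for a GNN regressor.

    model: callable (x, edge_index, edge_weight) -> scalar prediction
    edge_mask_logits: learnable logits, one per edge in edge_index
    """
    edge_mask = torch.sigmoid(edge_mask_logits)              # soft edge selection
    y_full = model(x, edge_index, torch.ones_like(edge_mask))  # original prediction
    y_masked = model(x, edge_index, edge_mask)                  # prediction on the explanatory subgraph
    # Regression analogue of the prediction-preservation term: instead of
    # cross-entropy on a discrete class label, match the continuous output.
    pred_loss = (y_masked - y_full).pow(2).mean()
    # Information-bottleneck-style penalty: encourage a sparse explanation mask.
    sparsity_loss = edge_mask.mean()
    return pred_loss + sparsity_weight * sparsity_loss
```

In practice, the mask logits would be optimized by gradient descent while the GNN's parameters stay frozen; the paper's mix-up framework and self-supervised strategy for continuously ordered labels build on top of this kind of objective.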
