

Poster

Gradient Rewiring for Editable Graph Neural Network Training

Zhimeng Jiang · Zirui Liu · Xiaotian Han · Qizhang Feng · Hongye Jin · Qiaoyu Tan · Kaixiong Zhou · Na Zou · Xia Hu

West Ballroom A-D #6808
Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Deep neural networks are ubiquitously adopted in many applications, such as computer vision, natural language processing, and graph analytics. However, well-trained neural networks can make prediction errors after deployment as the world changes. \textit{Model editing} updates the base model to patch prediction errors with limited access to training data and computational resources. Despite recent advances in model editors for computer vision and natural language processing, editable training for graph neural networks (GNNs) is rarely explored. The challenge of editable GNN training lies in the inherent information aggregation across neighbors, through which model editors may mislead the predictions of normal nodes. In this paper, we first observe a significant inconsistency between the gradients of the cross-entropy loss for the target node and for the training nodes, which indicates that directly fine-tuning the base model using the loss of the target node deteriorates performance on the training nodes. Motivated by this gradient inconsistency issue, we propose a simple yet effective \underline{G}radient \underline{R}ewiring method for \underline{E}ditable graph neural network training, named \textbf{GRE}. Specifically, we first store the anchor gradient of the loss for the training nodes to preserve locality. Subsequently, we rewire the gradient of the loss for the target node using the anchor gradient so that performance on the training nodes is preserved. Experiments demonstrate the effectiveness of GRE on various model architectures and graph datasets of different types and scales.
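
To make the rewiring idea above concrete, the sketch below shows one plausible reading of it in PyTorch: a GEM-style projection that, whenever the target-node gradient conflicts with the cached anchor gradient (negative inner product), removes the conflicting component before the update. This is a minimal sketch under that assumption, not the paper's exact formulation; the names rewire_gradient, edit_model, loss_fn, target_batch, and anchor_grad are hypothetical placeholders.

    import torch

    def rewire_gradient(g_target: torch.Tensor, g_anchor: torch.Tensor) -> torch.Tensor:
        """Project the target-node gradient so it does not conflict with the
        anchor gradient stored for the training nodes (GEM-style rule; the
        exact rewiring in GRE may differ)."""
        dot = torch.dot(g_target, g_anchor)
        if dot >= 0:
            # No conflict: this edit direction does not increase the training-node loss.
            return g_target
        # Remove the component of g_target that points against the anchor gradient.
        return g_target - (dot / (g_anchor.norm() ** 2 + 1e-12)) * g_anchor

    def edit_model(model, loss_fn, target_batch, anchor_grad, lr=1e-3, steps=10):
        """Fine-tune the base model on the misclassified target node while
        rewiring every gradient step against the cached anchor gradient."""
        params = [p for p in model.parameters() if p.requires_grad]
        for _ in range(steps):
            loss = loss_fn(model, target_batch)
            grads = torch.autograd.grad(loss, params)
            flat = torch.cat([g.reshape(-1) for g in grads])
            flat = rewire_gradient(flat, anchor_grad)
            # Write the rewired gradient back and take a plain SGD step.
            offset = 0
            with torch.no_grad():
                for p in params:
                    n = p.numel()
                    p -= lr * flat[offset:offset + n].reshape(p.shape)
                    offset += n
        return model

In this sketch, the anchor gradient would be computed once on the training nodes before editing (flattened the same way as above) and kept fixed while the target node is patched.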
