Poster
Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training Data
Qi Zhu · Natalia Ponomareva · Jiawei Han · Bryan Perozzi

Tue Dec 07 04:30 PM -- 06:00 PM (PST) @ Virtual

There has been a recent surge of interest in designing Graph Neural Networks (GNNs) for semi-supervised learning tasks. Unfortunately, this work has assumed that the nodes labeled for use in training were selected uniformly at random (i.e., are an IID sample). However, in many real-world scenarios, gathering labels for graph nodes is both expensive and inherently biased, so this assumption cannot be met. GNNs can suffer poor generalization when this occurs, by overfitting to superfluous regularities present in the training data. In this work we present a method, Shift-Robust GNN (SR-GNN), designed to account for distributional differences between biased training data and the graph's true inference distribution. SR-GNN adapts GNN models to the presence of distributional shift between the nodes that have labels provided for training and the rest of the dataset. We illustrate the effectiveness of SR-GNN in a variety of experiments with biased training datasets on common GNN benchmark datasets for semi-supervised learning, where SR-GNN outperforms other GNN baselines in accuracy, eliminating at least ~40% of the negative effects introduced by biased training data. On the largest dataset we consider, ogb-arxiv, we observe a 2% absolute improvement over the baseline and reduce 30% of the negative effects.
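The abstract describes penalizing the distributional difference between the (biased) labeled nodes and the rest of the graph. As a minimal illustrative sketch — not the authors' exact formulation — one common way to measure such a shift between two sets of node representations is a central-moment-style discrepancy, which a training loop could add as a regularization term (the function name `cmd` and the moment order `K` are choices made here for illustration):

```python
import numpy as np

def cmd(X, Y, K=3):
    """Central-moment discrepancy between two sample sets (rows = samples).

    Compares the means and the first K central moments of two empirical
    distributions, e.g. hidden representations of the labeled (biased)
    training nodes vs. representations of nodes from the full graph.
    A shift-robust training objective can add a penalty of this form
    to the usual supervised loss.
    """
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    d = np.linalg.norm(mx - my)          # first moment: mean difference
    for k in range(2, K + 1):
        cx = ((X - mx) ** k).mean(axis=0)  # k-th central moment of X
        cy = ((Y - my) ** k).mean(axis=0)  # k-th central moment of Y
        d += np.linalg.norm(cx - cy)
    return d

rng = np.random.default_rng(0)
labeled  = rng.normal(0.0, 1.0, size=(500, 4))  # stand-in: biased sample
unbiased = rng.normal(0.0, 1.0, size=(500, 4))  # same distribution
shifted  = rng.normal(2.0, 1.0, size=(500, 4))  # shifted distribution
print(cmd(labeled, unbiased) < cmd(labeled, shifted))
```

Samples drawn from the same distribution yield a small discrepancy, while a mean-shifted sample yields a much larger one, which is the signal a shift-robust regularizer would minimize during training.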

Author Information

Qi Zhu (University of Illinois, Urbana Champaign)
Natalia Ponomareva (Google)
Jiawei Han (University of Illinois at Urbana-Champaign)
Bryan Perozzi (Google)