
Be Confident! Towards Trustworthy Graph Neural Networks via Confidence Calibration
Xiao Wang · Hongrui Liu · Chuan Shi · Cheng Yang

Tue Dec 07 04:30 PM -- 06:00 PM (PST)

Although Graph Neural Networks (GNNs) have achieved remarkable accuracy, whether their results are trustworthy remains unexplored. Previous studies suggest that many modern neural networks are over-confident in their predictions; surprisingly, we discover that GNNs lean in the opposite direction, i.e., GNNs are under-confident. Confidence calibration for GNNs is therefore highly desirable. In this paper, we propose a novel trustworthy GNN model by designing a topology-aware post-hoc calibration function. Specifically, we first verify that the confidence distribution in a graph has the homophily property, and this finding inspires us to design a calibration GNN model (CaGCN) to learn the calibration function. CaGCN obtains a unique transformation from the logits of GNNs to the calibrated confidence for each node; meanwhile, this transformation preserves the order between classes, satisfying the accuracy-preserving property. Moreover, we apply the calibration GNN to the self-training framework, showing that more trustworthy pseudo labels can be obtained with the calibrated confidence, further improving performance. Extensive experiments demonstrate the effectiveness of the proposed model in terms of both calibration and accuracy.
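The abstract describes a topology-aware, order-preserving calibration: logits are rescaled per node so that confidence changes but the predicted class does not. A minimal NumPy sketch of this idea is below, assuming a one-layer graph convolution (with hypothetical, untrained weights `w`) that maps each node's logits to a positive per-node temperature; dividing logits by a positive scalar preserves the between-class ordering, which is the accuracy-preserving property the paper names. This is an illustrative sketch, not the authors' implementation: in CaGCN the calibration network is trained post hoc on a held-out set.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def topology_aware_calibration(logits, adj, w):
    """Sketch of per-node temperature scaling over a graph.

    logits: (n, c) array of uncalibrated GNN logits.
    adj:    (n, n) binary adjacency matrix.
    w:      (c, 1) hypothetical weights of the calibration layer
            (in practice these would be learned, e.g. with NLL loss
            on a validation set).
    """
    n = adj.shape[0]
    # Symmetrically normalized adjacency with self-loops (standard GCN form).
    a_hat = adj + np.eye(n)
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # One propagation step, then softplus so each temperature t_i > 0.
    t = np.log1p(np.exp(a_norm @ logits @ w)) + 1e-6  # shape (n, 1)
    # Dividing by a positive per-node scalar rescales confidence
    # without changing each node's argmax class.
    return softmax(logits / t)

# Tiny 3-node path graph as a usage example.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
logits = np.array([[2.0, 0.5],
                   [0.1, 1.0],
                   [3.0, 1.0]])
rng = np.random.default_rng(0)
w = rng.normal(size=(2, 1))
probs = topology_aware_calibration(logits, adj, w)
```

Because the temperature is strictly positive, the calibrated probabilities always keep the same predicted class as the raw logits, so accuracy is untouched while confidence is adjusted.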

Author Information

Xiao Wang (Beijing University of Posts and Telecommunications)
Hongrui Liu (Beijing University of Posts and Telecommunications)
Chuan Shi (Beijing University of Posts and Telecommunications, Tsinghua University)
Cheng Yang (Beijing University of Posts and Telecommunications)
