The widespread deployment of Graph Neural Networks (GNNs) has sparked significant interest in their explainability, which plays a vital role in model auditing and trustworthy graph learning. The objective of GNN explainability is to discern the underlying graph structures that have the greatest impact on model predictions. Ensuring that generated explanations are reliable requires attention to the in-distribution property, particularly because GNNs are vulnerable to out-of-distribution data. Unfortunately, prevailing explainability methods tend to constrain the generated explanations to the structure of the original graph, thereby downplaying the in-distribution property and producing explanations that lack reliability.

To address these challenges, we propose D4Explainer, a novel approach that provides in-distribution GNN explanations for both counterfactual and model-level explanation scenarios. D4Explainer incorporates generative graph distribution learning into the optimization objective, which accomplishes two goals: 1) generating a collection of diverse counterfactual graphs that conform to the in-distribution property for a given instance, and 2) identifying the most discriminative graph patterns that contribute to a specific class prediction, thus serving as model-level explanations. Notably, D4Explainer is the first unified framework that combines both counterfactual and model-level explanations.

Empirical evaluations on synthetic and real-world datasets provide compelling evidence of the state-of-the-art performance achieved by D4Explainer in terms of explanation accuracy, faithfulness, diversity, and robustness.
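To make the notion of a counterfactual explanation concrete, here is a toy sketch (not the D4Explainer method itself, which relies on generative graph distribution learning): a hypothetical motif-based classifier stands in for a trained GNN, and a brute-force search finds the single-edge deletions that flip its prediction. All names (`has_triangle`, `counterfactuals`) are illustrative, not from the paper.

```python
def has_triangle(edges):
    """Stand-in 'GNN': predicts class 1 iff the graph contains a triangle."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    for u, v in edges:
        # a triangle exists if u and v share a common neighbor
        if adj[u] & adj[v] - {u, v}:
            return 1
    return 0

def counterfactuals(edges, model):
    """Return every single-edge deletion that flips the model's prediction."""
    base = model(edges)
    flips = []
    for e in edges:
        reduced = [x for x in edges if x != e]
        if model(reduced) != base:
            flips.append(e)
    return flips

# Triangle 0-1-2 plus a pendant edge 2-3: each triangle edge is a
# counterfactual, the pendant edge is not.
graph = [(0, 1), (1, 2), (0, 2), (2, 3)]
print(counterfactuals(graph, has_triangle))  # [(0, 1), (1, 2), (0, 2)]
```

Real explainers replace the brute-force enumeration with learned perturbations; D4Explainer's contribution, per the abstract, is additionally constraining those counterfactual graphs to stay in-distribution.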
Author Information
Jialin Chen (Yale University)
Shirley Wu (Computer Science Department, Stanford University)
Abhijit Gupta (Yale University)
Rex Ying (Yale University)
More from the Same Authors
- 2022 : GraphFramEx: Towards Systematic Evaluation of Explainability Methods for Graph Neural Networks
  Kenza Amara · Rex Ying · Ce Zhang
- 2022 : Learning Efficient Hybrid Particle-continuum Representations of Non-equilibrium N-body Systems
  Tailin Wu · Michael Sun · Hsuan-Gu Chou · Pranay Reddy Samala · Sithipont Cholsaipant · Sophia Kivelson · Jacqueline Yau · Rex Ying · E. Paulo Alves · Jure Leskovec · Frederico Fiuza
- 2022 : How Powerful is Implicit Denoising in Graph Neural Networks
  Songtao Liu · Rex Ying · Hanze Dong · Lu Lin · Jinghui Chen · Dinghao Wu
- 2022 : Efficient Automatic Machine Learning via Design Graphs
  Shirley Wu · Jiaxuan You · Jure Leskovec · Rex Ying
- 2022 : GraphFramEx: Towards Systematic Evaluation of Explainability Methods for Graph Neural Networks
  Kenza Amara · Rex Ying · Zitao Zhang · Zhihao Han · Yinan Shan · Ulrik Brandes · Sebastian Schemm
- 2023 : GInX-Eval: Towards In-Distribution Evaluation of Graph Neural Network Explanations
  Kenza Amara · Mennatallah El-Assady · Rex Ying
- 2023 : FAFormer: Frame Averaging Transformer for Predicting Nucleic Acid-Protein Interactions
  Tinglin Huang · Zhenqiao Song · Rex Ying · Wengong Jin
- 2023 Workshop: New Frontiers in Graph Learning (GLFrontiers)
  Jiaxuan You · Rex Ying · Hanjun Dai · Ge Liu · Azalia Mirhoseini · Smita Krishnaswamy
- 2023 Poster: Static and Sequential Malicious Attacks in the Context of Selective Forgetting
  Chenxu Zhao · Wei Qian · Rex Ying · Mengdi Huai
- 2023 Poster: Learning to Group Auxiliary Datasets for Molecule
  Tinglin Huang · Ziniu Hu · Rex Ying
- 2023 Poster: MuSe-GNN: Learning Unified Gene Representation From Multimodal Biological Graph Data
  Tianyu Liu · Yuge Wang · Rex Ying · Hongyu Zhao
- 2023 Poster: TempME: Towards the Explainability of Temporal Graph Neural Networks via Motif Discovery
  Jialin Chen · Rex Ying
- 2022 Workshop: New Frontiers in Graph Learning
  Jiaxuan You · Marinka Zitnik · Rex Ying · Yizhou Sun · Hanjun Dai · Stefanie Jegelka
- 2022 : Invited Talk
  Rex Ying