Revisiting Heterophily For Graph Neural Networks
Sitao Luan · Chenqing Hua · Qincheng Lu · Jiaqi Zhu · Mingde Zhao · Shuyuan Zhang · Xiao-Wen Chang · Doina Precup

Thu Dec 01 02:00 PM -- 04:00 PM (PST) @ Hall J #236

Graph Neural Networks (GNNs) extend basic Neural Networks (NNs) by using graph structures based on the relational inductive bias (the homophily assumption). While GNNs have been commonly believed to outperform NNs in real-world tasks, recent work has identified a non-trivial set of datasets where their performance compared to NNs is not satisfactory. Heterophily has been considered the main cause of this empirical observation, and numerous works have been put forward to address it. In this paper, we first revisit the widely used homophily metrics and point out that their consideration of only graph-label consistency is a shortcoming. Then, we study heterophily from the perspective of post-aggregation node similarity and define new homophily metrics, which are potentially advantageous compared to existing ones. Based on this investigation, we prove that some harmful cases of heterophily can be effectively addressed by a local diversification operation. We then propose Adaptive Channel Mixing (ACM), a framework that adaptively exploits aggregation, diversification, and identity channels to extract richer localized information in each baseline GNN layer. ACM is more powerful than the commonly used uni-channel framework for node classification tasks on heterophilic graphs. When evaluated on 10 benchmark node classification tasks, ACM-augmented baselines consistently achieve significant performance gains, exceeding state-of-the-art GNNs on most tasks without incurring significant computational burden.
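The two ideas in the abstract can be illustrated with a short sketch: the widely used edge homophily metric (the fraction of edges whose endpoints share a label, i.e. pure graph-label consistency), and an ACM-style layer that mixes a low-pass (aggregation), a high-pass (diversification), and an identity channel with per-node softmax weights. This is a minimal NumPy illustration, not the paper's exact formulation; the weight matrices `W_L`, `W_H`, `W_I` and the scalar mixing projection `W_mix` are hypothetical names introduced here for exposition.

```python
import numpy as np

def edge_homophily(A, y):
    """Fraction of (undirected) edges connecting same-label nodes."""
    src, dst = np.nonzero(np.triu(A))
    return float((y[src] == y[dst]).mean())

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def acm_layer(A, X, W_L, W_H, W_I, W_mix):
    """One ACM-style layer (illustrative sketch, not the paper's exact model).

    A: (n, n) adjacency without self-loops; X: (n, f) node features.
    Three channels are computed and mixed with per-node softmax weights:
      - aggregation (low-pass filter, symmetric-normalized adjacency)
      - diversification (high-pass filter, I minus that adjacency)
      - identity (the raw features)
    """
    n = A.shape[0]
    d = A.sum(1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(np.maximum(d, 1e-12)), 0.0)
    A_sym = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]  # D^-1/2 A D^-1/2
    I = np.eye(n)
    H_L = A_sym @ X @ W_L        # aggregation (low-pass) channel
    H_H = (I - A_sym) @ X @ W_H  # diversification (high-pass) channel
    H_I = X @ W_I                # identity channel
    # one scalar score per node per channel, normalized with softmax
    scores = np.stack([H_L @ W_mix, H_H @ W_mix, H_I @ W_mix], axis=1)  # (n, 3)
    alpha = softmax(scores, axis=1)
    return alpha[:, 0:1] * H_L + alpha[:, 1:2] * H_H + alpha[:, 2:3] * H_I
```

On a highly heterophilic graph (low `edge_homophily`), the high-pass channel lets the layer emphasize differences between a node and its neighbors instead of smoothing them away, which is the intuition behind the diversification operation.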

Author Information

Sitao Luan (McGill University, Mila)

I’m a second-year Ph.D. student working with Professor Doina Precup and Professor Xiao-Wen Chang at the intersection of reinforcement learning and matrix computations. I’m currently interested in approximate dynamic programming and Krylov subspace methods, and I'm working on constructing basis functions for value function approximation in model-based reinforcement learning.

Chenqing Hua (McGill University)
Qincheng Lu (McGill University)
Jiaqi Zhu (McGill University)
Mingde Zhao (McGill University)
Shuyuan Zhang (McGill University / Mila)
Xiao-Wen Chang (McGill University)
Doina Precup (McGill University / Mila / DeepMind Montreal)
