
How Powerful are K-hop Message Passing Graph Neural Networks
Jiarui Feng · Yixin Chen · Fuhai Li · Anindya Sarkar · Muhan Zhang

Thu Dec 01 09:00 AM -- 11:00 AM (PST) @ Hall J #1035
The most popular design paradigm for Graph Neural Networks (GNNs) is 1-hop message passing---aggregating information from 1-hop neighbors repeatedly. However, the expressive power of 1-hop message passing is bounded by the Weisfeiler-Lehman (1-WL) test. Recently, researchers extended 1-hop message passing to $K$-hop message passing, which aggregates information from the $K$-hop neighbors of a node simultaneously. However, there is no formal analysis of the expressive power of $K$-hop message passing. In this work, we theoretically characterize the expressive power of $K$-hop message passing. Specifically, we first formally differentiate two kernels of $K$-hop message passing that are often conflated in previous works. We then characterize the expressive power of $K$-hop message passing by showing that it is more powerful than 1-WL and can distinguish almost all regular graphs. Despite this higher expressive power, we show that $K$-hop message passing still cannot distinguish some simple regular graphs and that its expressive power is bounded by 3-WL. To further enhance its expressive power, we introduce KP-GNN, a framework that improves $K$-hop message passing by leveraging the peripheral subgraph information in each hop. We show that KP-GNN can distinguish many distance regular graphs that cannot be distinguished by previous distance encoding or 3-WL methods. Experimental results verify the expressive power and effectiveness of KP-GNN, which achieves competitive results across all benchmark datasets.
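To make the core idea concrete, below is a minimal sketch of one $K$-hop message passing layer under the shortest-path-distance kernel mentioned in the abstract: features are aggregated separately for each hop $k = 1, \dots, K$ and then combined. This is an illustrative assumption, not the authors' implementation; function names such as `k_hop_neighbors` and `khop_layer` are hypothetical, plain summation stands in for learned aggregation/combination functions, and the peripheral subgraph information used by KP-GNN is omitted.

```python
from collections import deque

def k_hop_neighbors(adj, node, K):
    """Return {k: nodes at shortest-path distance exactly k from `node`}, for k = 1..K."""
    dist = {node: 0}
    frontier = deque([node])
    hops = {k: set() for k in range(1, K + 1)}
    while frontier:
        u = frontier.popleft()
        if dist[u] == K:
            continue  # do not expand beyond K hops
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                hops[dist[v]].add(v)
                frontier.append(v)
    return hops

def khop_layer(adj, h, K):
    """One K-hop message passing step: aggregate features hop by hop, then combine across hops.
    `adj` maps node -> list of neighbors; `h` maps node -> feature vector (list of floats).
    Sum aggregation is used for simplicity; a learned COMBINE with per-hop parameters
    would replace the plain additions in practice."""
    new_h = {}
    for node in adj:
        hops = k_hop_neighbors(adj, node, K)
        combined = list(h[node])  # start from the node's own feature
        for k in range(1, K + 1):
            agg = [0.0] * len(h[node])
            for v in hops[k]:
                agg = [a + x for a, x in zip(agg, h[v])]
            combined = [c + a for c, a in zip(combined, agg)]
        new_h[node] = combined
    return new_h

# Tiny usage example on a 5-cycle with scalar features.
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
h = {v: [1.0] for v in adj}
print(khop_layer(adj, h, K=2))  # each node sees itself, two 1-hop and two 2-hop neighbors
```

With $K = 1$ this reduces to ordinary 1-hop message passing; larger $K$ lets a single layer see strictly more of the graph, which is the source of the extra expressive power analyzed in the paper.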

Author Information

Jiarui Feng (Washington University in St. Louis)
Yixin Chen (Washington University in St. Louis)
Fuhai Li (Washington University in St. Louis)
Anindya Sarkar (Indian Institute of Technology Hyderabad)
Muhan Zhang (Peking University)
