

Graph Mixture of Experts: Learning on Large-Scale Graphs with Explicit Diversity Modeling

Haotao Wang · Ziyu Jiang · Yuning You · Yan Han · Gaowen Liu · Jayanth Srinivasa · Ramana Kompella · Zhangyang "Atlas" Wang

Great Hall & Hall B1+B2 (level 1) #2016
Wed 13 Dec 8:45 a.m. PST — 10:45 a.m. PST

Abstract: Graph neural networks (GNNs) have found extensive applications in learning from graph data. However, real-world graphs often possess diverse structures and comprise nodes and edges of varying types. To bolster the generalization capacity of GNNs, it has become customary to augment training graph structures through techniques like graph augmentations and large-scale pre-training on a wider array of graphs. Balancing this diversity while avoiding increased computational costs and the notorious trainability issues of GNNs is crucial. This study introduces the concept of Mixture-of-Experts (MoE) to GNNs, with the aim of augmenting their capacity to adapt to a diverse range of training graph structures, without incurring explosive computational overhead. The proposed Graph Mixture of Experts (GMoE) model empowers individual nodes in the graph to dynamically and adaptively select their own information aggregation experts. These experts are trained to capture distinct subgroups of graph structures and to incorporate information with varying hop sizes, where those with larger hop sizes specialize in gathering information over longer distances. The effectiveness of GMoE is validated through a series of experiments on a diverse set of tasks, including graph, node, and link prediction, using the OGB benchmark. Notably, it enhances ROC-AUC by $1.81\%$ in ogbg-molhiv and by $1.40\%$ in ogbg-molbbbp, when compared to the non-MoE baselines. Our code is publicly available at
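To make the mechanism described in the abstract concrete, the following is a minimal NumPy sketch of one GMoE-style layer: each node scores a set of aggregation experts with a gating network, keeps the top-k experts (sparse routing), and combines expert outputs, where each expert aggregates features over its own hop size. This is an illustrative reconstruction under stated assumptions, not the authors' implementation; all names (`gmoe_layer`, `W_experts`, `W_gate`, `hops`, `k`) are hypothetical, and details such as the normalization of the adjacency and the exact gating function may differ from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gmoe_layer(X, A, W_experts, W_gate, hops, k=2):
    """Sketch of a per-node mixture of aggregation experts.

    X         : (N, d) node features
    A         : (N, N) adjacency matrix (0/1)
    W_experts : list of (d, d_out) weight matrices, one per expert
    W_gate    : (d, num_experts) gating weights
    hops      : hop size used by each expert (assumed design choice)
    k         : number of experts each node routes to (sparse gating)
    """
    N = X.shape[0]
    # Row-normalized adjacency with self-loops (a common GNN choice)
    A_hat = A + np.eye(N)
    A_hat = A_hat / A_hat.sum(axis=1, keepdims=True)

    # Gating: each node scores every expert, keeps only its top-k
    scores = X @ W_gate                         # (N, num_experts)
    topk = np.argsort(scores, axis=1)[:, -k:]   # top-k expert indices per node
    masked = np.full_like(scores, -np.inf)
    np.put_along_axis(masked, topk,
                      np.take_along_axis(scores, topk, axis=1), axis=1)
    gates = softmax(masked, axis=1)             # zero weight outside top-k

    # Each expert aggregates over its own hop size, then transforms
    out = np.zeros((N, W_experts[0].shape[1]))
    for e, (W, h) in enumerate(zip(W_experts, hops)):
        H = X
        for _ in range(h):                      # h rounds of neighbor averaging
            H = A_hat @ H
        out += gates[:, [e]] * (H @ W)          # gate-weighted expert output
    return out

# Toy usage with random data
rng = np.random.default_rng(0)
N, d, d_out, E = 6, 4, 3, 3
X = rng.normal(size=(N, d))
A = (rng.random((N, N)) < 0.4).astype(float)
A = np.maximum(A, A.T)                          # make it undirected
W_experts = [rng.normal(size=(d, d_out)) for _ in range(E)]
W_gate = rng.normal(size=(d, E))
Y = gmoe_layer(X, A, W_experts, W_gate, hops=[1, 2, 3], k=2)
```

Because only k of the experts receive nonzero gate weight per node, the per-node compute stays close to a single-expert GNN while the model as a whole can specialize experts to different structures and hop ranges, which is the efficiency argument the abstract makes.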
