Poster
Long Range Graph Benchmark
Vijay Prakash Dwivedi · Ladislav Rampášek · Michael Galkin · Ali Parviz · Guy Wolf · Anh Tuan Luu · Dominique Beaini
Graph Neural Networks (GNNs) based on the message passing (MP) paradigm generally exchange information between 1-hop neighbors to build node representations at each layer. In principle, such networks are not able to capture long-range interactions (LRI) that may be desired or necessary for learning a given task on graphs. Recently, there has been increasing interest in the development of Transformer-based methods for graphs that can consider full node connectivity beyond the original sparse structure, thus enabling the modeling of LRI. However, MP-GNNs that simply rely on 1-hop message passing often fare better on several existing graph benchmarks when combined with positional feature representations, among other innovations, hence limiting the perceived utility and ranking of Transformer-like architectures. Here, we present the Long Range Graph Benchmark (LRGB) with 5 graph learning datasets: $\texttt{PascalVOC-SP}$, $\texttt{COCO-SP}$, $\texttt{PCQM-Contact}$, $\texttt{Peptides-func}$ and $\texttt{Peptides-struct}$, which arguably require LRI reasoning to achieve strong performance on a given task. We benchmark both baseline GNNs and Graph Transformer networks to verify that models which capture long-range dependencies perform significantly better on these tasks. Therefore, these datasets are suitable for benchmarking and exploration of MP-GNNs and Graph Transformer architectures that are intended to capture LRI.
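To make the 1-hop limitation concrete, the following is a minimal, self-contained sketch of one message passing layer with mean aggregation (illustrative only; this is not the paper's implementation, and the function and variable names are hypothetical). On a path graph, a signal placed on one endpoint only reaches a node $k$ hops away after stacking $k$ such layers, which is exactly why plain MP-GNNs struggle with long-range interactions:

```python
# Minimal sketch of one 1-hop message passing layer (mean aggregation).
# Illustrative assumption: node features are plain Python lists of floats.
from typing import Dict, List

def mp_layer(features: Dict[int, List[float]],
             neighbors: Dict[int, List[int]]) -> Dict[int, List[float]]:
    """Each node averages its own feature vector with its 1-hop neighbors'."""
    out = {}
    for node, feat in features.items():
        msgs = [features[nbr] for nbr in neighbors.get(node, [])]
        agg = [feat] + msgs                      # self + 1-hop messages
        dim = len(feat)
        out[node] = [sum(v[d] for v in agg) / len(agg) for d in range(dim)]
    return out

# A 4-node path graph 0-1-2-3: a signal on node 0 reaches node 3
# only after 3 layers (its feature stays 0.0 after 1 or 2 layers).
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
h = {0: [1.0], 1: [0.0], 2: [0.0], 3: [0.0]}
for _ in range(3):
    h = mp_layer(h, nbrs)
print(h[3])  # nonzero only once the receptive field spans 3 hops
```

A full-attention Graph Transformer, by contrast, lets node 3 attend to node 0 in a single layer, regardless of graph distance.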
Author Information
Vijay Prakash Dwivedi (Nanyang Technological University, Singapore)
Ladislav Rampášek (Université de Montréal)
Michael Galkin (Mila, McGill University)
Ali Parviz (New Jersey Institute of Technology)
Guy Wolf (Université de Montréal; Mila)
Anh Tuan Luu (Nanyang Technological University, Singapore)
Dominique Beaini (Polytechnique Montréal)
More from the Same Authors
- 2022: Generalized Laplacian Positional Encoding for Graph Representation Learning » Sohir Maskey · Ali Parviz · Maximilian Thiessen · Hannes Stärk · Ylli Sadikaj · Haggai Maron
- 2022 Spotlight: Lightning Talks 6A-4 » Xiu-Shen Wei · Konstantina Dritsa · Guillaume Huguet · ABHRA CHAUDHURI · Zhenbin Wang · Kevin Qinghong Lin · Yutong Chen · Jianan Zhou · Yongsen Mao · Junwei Liang · Jinpeng Wang · Mao Ye · Yiming Zhang · Aikaterini Thoma · H.-Y. Xu · Daniel Sumner Magruder · Enwei Zhang · Jianing Zhu · Ronglai Zuo · Massimiliano Mancini · Hanxiao Jiang · Jun Zhang · Fangyun Wei · Faen Zhang · Ioannis Pavlopoulos · Zeynep Akata · Xiatian Zhu · Jingfeng ZHANG · Alexander Tong · Mattia Soldan · Chunhua Shen · Yuxin Peng · Liuhan Peng · Michael Wray · Tongliang Liu · Anjan Dutta · Yu Wu · Oluwadamilola Fasina · Panos Louridas · Angel Chang · Manik Kuchroo · Manolis Savva · Shujie LIU · Wei Zhou · Rui Yan · Gang Niu · Liang Tian · Bo Han · Eric Z. XU · Guy Wolf · Yingying Zhu · Brian Mak · Difei Gao · Masashi Sugiyama · Smita Krishnaswamy · Rong-Cheng Tu · Wenzhe Zhao · Weijie Kong · Chengfei Cai · WANG HongFa · Dima Damen · Bernard Ghanem · Wei Liu · Mike Zheng Shou
- 2022 Spotlight: Manifold Interpolating Optimal-Transport Flows for Trajectory Inference » Guillaume Huguet · Daniel Sumner Magruder · Alexander Tong · Oluwadamilola Fasina · Manik Kuchroo · Guy Wolf · Smita Krishnaswamy
- 2022 Poster: Inductive Logical Query Answering in Knowledge Graphs » Michael Galkin · Zhaocheng Zhu · Hongyu Ren · Jian Tang
- 2022 Poster: Can Hybrid Geometric Scattering Networks Help Solve the Maximum Clique Problem? » Yimeng Min · Frederik Wenkel · Michael Perlmutter · Guy Wolf
- 2022 Poster: Manifold Interpolating Optimal-Transport Flows for Trajectory Inference » Guillaume Huguet · Daniel Sumner Magruder · Alexander Tong · Oluwadamilola Fasina · Manik Kuchroo · Guy Wolf · Smita Krishnaswamy
- 2022 Poster: Recipe for a General, Powerful, Scalable Graph Transformer » Ladislav Rampášek · Michael Galkin · Vijay Prakash Dwivedi · Anh Tuan Luu · Guy Wolf · Dominique Beaini
- 2021 Poster: Contrastive Learning for Neural Topic Model » Thong Nguyen · Anh Tuan Luu
- 2021 Poster: How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness? » Xinshuai Dong · Anh Tuan Luu · Min Lin · Shuicheng Yan · Hanwang Zhang
- 2019 Poster: Compositional De-Attention Networks » Yi Tay · Anh Tuan Luu · Aston Zhang · Shuohang Wang · Siu Cheung Hui