In this paper, we provide a theory of using graph neural networks (GNNs) for multi-node representation learning, where we are interested in learning a representation for a set of more than one node, such as a link. GNNs were originally designed to learn single-node representations. When we want to learn a representation of a node set involving multiple nodes, a common practice in previous works is to directly aggregate the single-node representations obtained by a GNN into a joint node set representation. In this paper, we show a fundamental limitation of such an approach, namely the inability to capture the dependence between nodes in the node set, and argue that directly aggregating individual node representations does not lead to an effective joint representation for multiple nodes. We then observe that a few previous successful approaches to multi-node representation learning, including SEAL, Distance Encoding, and ID-GNN, all use node labeling. These methods first label nodes in the graph according to their relationships with the target node set before applying a GNN, and then aggregate the node representations obtained in the labeled graph into a node set representation. By investigating their inner mechanisms, we unify these node labeling techniques into a single, most general form, which we call the labeling trick. We prove that with the labeling trick, a sufficiently expressive GNN learns the most expressive node set representations, thus in principle solving any joint learning task over node sets. Experiments on one important two-node representation learning task, link prediction, verify our theory. Our work explains the superior performance of previous node-labeling-based methods and establishes a theoretical foundation for using GNNs in multi-node representation learning.
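To make the mechanism concrete, the following is a minimal, hypothetical sketch of the zero-one labeling trick for link prediction with a toy sum-aggregation message-passing layer standing in for a GNN. All names (`zero_one_labels`, `gnn_layer`, `link_representation`) are illustrative, not from the paper's code:

```python
# Hypothetical sketch of the zero-one labeling trick for link prediction.
# A toy sum-aggregation layer stands in for a real GNN; names are illustrative.

def zero_one_labels(num_nodes, target_set):
    # Label 1 for nodes in the target node set, 0 for all others.
    return [1 if v in target_set else 0 for v in range(num_nodes)]

def gnn_layer(adj, feats):
    # One round of message passing: each node sums its own features
    # with those of its neighbors (a deliberately simple GNN layer).
    return [
        [feats[v][k] + sum(feats[u][k] for u in adj[v])
         for k in range(len(feats[v]))]
        for v in range(len(adj))
    ]

def link_representation(adj, node_feats, target_link, num_layers=2):
    # Step 1 (labeling trick): augment node features with labels that
    # encode each node's relationship to the target link.
    labels = zero_one_labels(len(adj), set(target_link))
    feats = [node_feats[v] + [labels[v]] for v in range(len(adj))]
    # Step 2: run the GNN on the labeled graph.
    for _ in range(num_layers):
        feats = gnn_layer(adj, feats)
    # Step 3: aggregate the representations of the two target nodes.
    i, j = target_link
    return [a + b for a, b in zip(feats[i], feats[j])]
```

On a 6-cycle with identical node features, plain aggregation without labels gives every link the same representation, whereas labeling makes the representations of, say, links (0, 1) and (0, 3) differ, illustrating the dependence between target nodes that the labeling trick captures.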
Author Information
Muhan Zhang (Peking University)
Pan Li (Stanford University)
Yinglong Xia (University of Southern California)
Kai Wang (Facebook)
Long Jin (Facebook)
More from the Same Authors
- 2021 Spotlight: Generic Neural Architecture Search via Regression (Yuhong Li · Cong Hao · Pan Li · Jinjun Xiong · Deming Chen)
- 2021: Semi-supervised Graph Neural Network for Particle-level Noise Removal (Tianchun Li · Shikun Liu · Nhan Tran · Mia Liu · Pan Li)
- 2022 Poster: Rethinking Knowledge Graph Evaluation Under the Open-World Assumption (Haotong Yang · Zhouchen Lin · Muhan Zhang)
- 2022 Poster: Geodesic Graph Neural Network for Efficient Graph Representation Learning (Lecheng Kong · Yixin Chen · Muhan Zhang)
- 2022 Poster: How Powerful are K-hop Message Passing Graph Neural Networks (Jiarui Feng · Yixin Chen · Fuhai Li · Anindya Sarkar · Muhan Zhang)
- 2021 Poster: Generic Neural Architecture Search via Regression (Yuhong Li · Cong Hao · Pan Li · Jinjun Xiong · Deming Chen)
- 2021 Poster: Decoupling the Depth and Scope of Graph Neural Networks (Hanqing Zeng · Muhan Zhang · Yinglong Xia · Ajitesh Srivastava · Andrey Malevich · Rajgopal Kannan · Viktor Prasanna · Long Jin · Ren Chen)
- 2021 Poster: Local Hyper-Flow Diffusion (Kimon Fountoulakis · Pan Li · Shenghao Yang)
- 2021 Poster: Adversarial Graph Augmentation to Improve Graph Contrastive Learning (Susheel Suresh · Pan Li · Cong Hao · Jennifer Neville)
- 2021 Poster: Nested Graph Neural Networks (Muhan Zhang · Pan Li)
- 2019 Poster: D-VAE: A Variational Autoencoder for Directed Acyclic Graphs (Muhan Zhang · Shali Jiang · Zhicheng Cui · Roman Garnett · Yixin Chen)
- 2018 Poster: Link Prediction Based on Graph Neural Networks (Muhan Zhang · Yixin Chen)
- 2018 Spotlight: Link Prediction Based on Graph Neural Networks (Muhan Zhang · Yixin Chen)
- 2018 Poster: Revisiting Decomposable Submodular Function Minimization with Incidence Relations (Pan Li · Olgica Milenkovic)
- 2018 Poster: Quadratic Decomposable Submodular Function Minimization (Pan Li · Niao He · Olgica Milenkovic)
- 2017 Poster: Inhomogeneous Hypergraph Clustering with Applications (Pan Li · Olgica Milenkovic)
- 2017 Spotlight: Inhomogeneous Hypergraph Clustering with Applications (Pan Li · Olgica Milenkovic)