Graph neural networks (GNNs) have been widely used for representation learning on graphs and have achieved state-of-the-art performance in tasks such as node classification and link prediction. However, most existing GNNs are designed to learn node representations on fixed and homogeneous graphs. These limitations become especially problematic when learning representations on a misspecified graph or a heterogeneous graph that consists of various types of nodes and edges. In this paper, we propose Graph Transformer Networks (GTNs), which are capable of generating new graph structures by identifying useful connections between unconnected nodes in the original graph, while learning effective node representations on the new graphs in an end-to-end fashion. The Graph Transformer layer, the core layer of GTNs, learns a soft selection of edge types and composite relations to generate useful multi-hop connections, so-called meta-paths. Our experiments show that GTNs learn new graph structures from data and tasks without domain knowledge, and yield powerful node representations via convolution on the new graphs. Without domain-specific graph preprocessing, GTNs achieved the best performance on all three benchmark node classification tasks against state-of-the-art methods that require pre-defined meta-paths from domain knowledge.
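The core idea of the Graph Transformer layer described above can be illustrated with a minimal sketch: each edge type of a heterogeneous graph contributes an adjacency matrix, a softmax over learnable scores gives a soft (convex) selection of edge types, and multiplying two softly selected adjacency matrices composes them into a 2-hop meta-path adjacency. The function and variable names below (`gt_layer`, `w1`, `w2`) are illustrative, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def gt_layer(adjacencies, w1, w2):
    """Sketch of one Graph Transformer layer.

    adjacencies: list of (N, N) adjacency matrices, one per edge type.
    w1, w2: learnable scores over edge types (here: plain arrays).
    Returns the adjacency matrix of a softly selected 2-hop meta-path.
    """
    # Soft selection: convex combination of edge-type adjacency matrices
    q1 = sum(a * w for a, w in zip(adjacencies, softmax(w1)))
    q2 = sum(a * w for a, w in zip(adjacencies, softmax(w2)))
    # Composition: (q1 @ q2)[i, j] > 0 iff some (softly weighted)
    # 2-hop path of the selected edge types connects node i to node j
    return q1 @ q2
```

With scores that strongly favor one edge type per selection, the result approaches a hard meta-path: e.g. selecting a "paper-author" matrix first and an "author-conference" matrix second yields an adjacency connecting papers to conferences. Node representations are then produced by graph convolution on such learned adjacencies.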
Author Information
Seongjun Yun (Korea University)
Minbyul Jeong (Korea University)
Raehyun Kim (Korea University)
Jaewoo Kang (Korea University)
Hyunwoo Kim (Korea University)
More from the Same Authors
- 2023 Poster: Advancing Bayesian Optimization via Learning Smooth Latent Spaces
  Seunghun Lee · Jaewon Chu · Sihyeon Kim · Juyeon Ko · Hyunwoo Kim
- 2023 Poster: NuTrea: Neural Tree Search for Context-guided Multi-hop KGQA
  Hyeong Kyu Choi · Seunghun Lee · Jaewon Chu · Hyunwoo Kim
- 2023 Poster: Unconstrained Pose Prior-Free Neural Radiance Field
  Injae Kim · Minhyuk Choi · Hyunwoo Kim
- 2022 Poster: TokenMixup: Efficient Attention-guided Token-level Data Augmentation for Transformers
  Hyeong Kyu Choi · Joonmyung Choi · Hyunwoo Kim
- 2022 Poster: Invertible Monotone Operators for Normalizing Flows
  Byeongkeun Ahn · Chiyoon Kim · Youngjoon Hong · Hyunwoo Kim
- 2022 Poster: SageMix: Saliency-Guided Mixup for Point Clouds
  Sanghyeok Lee · Minkyu Jeon · Injae Kim · Yunyang Xiong · Hyunwoo Kim
- 2021 Poster: Metropolis-Hastings Data Augmentation for Graph Neural Networks
  Hyeonjin Park · Seunghun Lee · Sihyeon Kim · Jinyoung Park · Jisu Jeong · Kyung-Min Kim · Jung-Woo Ha · Hyunwoo Kim
- 2021 Poster: Neo-GNNs: Neighborhood Overlap-aware Graph Neural Networks for Link Prediction
  Seongjun Yun · Seoyoon Kim · Junhyun Lee · Jaewoo Kang · Hyunwoo Kim
- 2020 Poster: Self-supervised Auxiliary Learning with Meta-paths for Heterogeneous Graphs
  Dasol Hwang · Jinyoung Park · Sunyoung Kwon · KyungMin Kim · Jung-Woo Ha · Hyunwoo Kim
- 2017 Spotlight: Maximizing Subset Accuracy with Recurrent Neural Networks in Multi-label Classification
  Jinseok Nam · Eneldo Loza Mencía · Hyunwoo J Kim · Johannes Fürnkranz
- 2017 Poster: Maximizing Subset Accuracy with Recurrent Neural Networks in Multi-label Classification
  Jinseok Nam · Eneldo Loza Mencía · Hyunwoo J Kim · Johannes Fürnkranz