Self-supervised learning of graph neural networks (GNNs) aims to learn accurate representations of graphs in an unsupervised manner, so that the learned representations transfer to diverse downstream tasks. Predictive learning and contrastive learning are the two most prevalent approaches to graph self-supervised learning, but each has its own drawbacks. Predictive learning methods can learn the contextual relationships between neighboring nodes and edges, yet they cannot learn global graph-level similarities. Contrastive learning can capture global graph-level similarities, but its objective of maximizing the similarity between two differently perturbed views of a graph may result in representations that cannot discriminate between two similar graphs with different properties. To tackle these limitations, we propose a framework that learns the exact discrepancy between the original and the perturbed graphs, coined Discrepancy-based Self-supervised LeArning (D-SLA). Specifically, we create multiple perturbations of a given graph with varying degrees of similarity and train the model to predict whether each graph is the original or a perturbed one. Moreover, we further aim to accurately capture the amount of discrepancy for each perturbed graph using the graph edit distance. We validate D-SLA on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction, on which it largely outperforms relevant baselines.
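The abstract outlines the core idea: embed the original graph together with several perturbed versions, score the original above its perturbations, and tie embedding distances to graph edit distance. Below is a minimal, illustrative sketch of such an objective. The toy encoder, the perturb_graph helper, the scoring head, and the exact loss forms are all assumptions made for illustration; this is not the authors' released implementation or the paper's precise losses.

```python
# Minimal sketch of a discrepancy-based objective in the spirit of D-SLA.
# All components below are illustrative stand-ins, not the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyGraphEncoder(nn.Module):
    """Stand-in for a GNN: one message-passing step followed by mean pooling."""
    def __init__(self, in_dim=8, hid_dim=16):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        # x: [num_nodes, in_dim], adj: [num_nodes, num_nodes]
        h = torch.relu(self.lin(adj @ x))   # aggregate neighbors, then transform
        return h.mean(dim=0)                # graph-level embedding

def perturb_graph(adj, num_edits):
    """Hypothetical perturbation: flip `num_edits` random adjacency entries."""
    adj = adj.clone()
    n = adj.size(0)
    for _ in range(num_edits):
        i, j = torch.randint(n, (2,))
        adj[i, j] = adj[j, i] = 1.0 - adj[i, j]
    return adj

def d_sla_style_loss(encoder, x, adj, edit_counts=(1, 3, 5)):
    """Two illustrative terms:
    (1) discriminate the original graph from its perturbations;
    (2) make embedding distances grow with the known number of edits,
        used here as a proxy for graph edit distance."""
    z_orig = encoder(x, adj)
    z_perts, dists = [], []
    for k in edit_counts:
        z_p = encoder(x, perturb_graph(adj, k))
        z_perts.append(z_p)
        dists.append(torch.norm(z_orig - z_p))

    # (1) original vs. perturbed: the original should receive the highest score
    scores = torch.stack([z_orig.sum()] + [z.sum() for z in z_perts])
    disc_loss = F.cross_entropy(scores.unsqueeze(0), torch.zeros(1, dtype=torch.long))

    # (2) align relative embedding distances with relative edit distances
    dists = torch.stack(dists)
    targets = torch.tensor(edit_counts, dtype=torch.float)
    edit_loss = F.mse_loss(dists / dists.sum(), targets / targets.sum())

    return disc_loss + edit_loss

if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(10, 8)                      # toy node features
    adj = (torch.rand(10, 10) > 0.7).float()    # toy random adjacency
    adj = ((adj + adj.T) > 0).float()           # symmetrize
    encoder = ToyGraphEncoder()
    loss = d_sla_style_loss(encoder, x, adj)
    loss.backward()
    print(f"toy D-SLA-style loss: {loss.item():.4f}")
```

The key design point the sketch tries to convey is that, unlike contrastive objectives that pull all perturbed views toward the original, a discrepancy-based objective keeps perturbed graphs at distances that reflect how much they were edited.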
Author Information
Dongki Kim (Korea Advanced Institute of Science and Technology (KAIST))
Jinheon Baek (KAIST)
Sung Ju Hwang (KAIST, AITRICS)
More from the Same Authors
-
2021 Spotlight: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning »
Hayeon Lee · Sewoong Lee · Song Chong · Sung Ju Hwang -
2021 Spotlight: Task-Adaptive Neural Network Search with Meta-Contrastive Learning »
Wonyong Jeong · Hayeon Lee · Geon Park · Eunyoung Hyung · Jinheon Baek · Sung Ju Hwang -
2021 : Skill-based Meta-Reinforcement Learning »
Taewook Nam · Shao-Hua Sun · Karl Pertsch · Sung Ju Hwang · Joseph Lim -
2022 Poster: Learning to Generate Inversion-Resistant Model Explanations »
Hoyong Jeong · Suyoung Lee · Sung Ju Hwang · Sooel Son -
2022 : SPRINT: Scalable Semantic Policy Pre-training via Language Instruction Relabeling »
Jesse Zhang · Karl Pertsch · Jiahui Zhang · Taewook Nam · Sung Ju Hwang · Xiang Ren · Joseph Lim -
2022 Poster: Factorized-FL: Personalized Federated Learning with Parameter Factorization & Similarity Matching »
Wonyong Jeong · Sung Ju Hwang -
2022 Poster: Set-based Meta-Interpolation for Few-Task Meta-Learning »
Seanie Lee · Bruno Andreis · Kenji Kawaguchi · Juho Lee · Sung Ju Hwang -
2021 Poster: Edge Representation Learning with Hypergraphs »
Jaehyeong Jo · Jinheon Baek · Seul Lee · Dongki Kim · Minki Kang · Sung Ju Hwang -
2021 Poster: Hit and Lead Discovery with Explorative RL and Fragment-based Molecule Generation »
Soojung Yang · Doyeong Hwang · Seul Lee · Seongok Ryu · Sung Ju Hwang -
2021 Poster: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning »
Hayeon Lee · Sewoong Lee · Song Chong · Sung Ju Hwang -
2021 Poster: Task-Adaptive Neural Network Search with Meta-Contrastive Learning »
Wonyong Jeong · Hayeon Lee · Geon Park · Eunyoung Hyung · Jinheon Baek · Sung Ju Hwang -
2021 Poster: Mini-Batch Consistent Slot Set Encoder for Scalable Set Encoding »
Bruno Andreis · Jeffrey Willette · Juho Lee · Sung Ju Hwang -
2020 Poster: Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph Link Prediction »
Jinheon Baek · Dong Bok Lee · Sung Ju Hwang