We introduce a conceptually simple yet effective model for self-supervised representation learning on graph data. Like previous methods, it generates two views of an input graph through data augmentation. However, unlike contrastive methods that focus on instance-level discrimination, we optimize a novel feature-level objective inspired by classical Canonical Correlation Analysis. Compared with other works, our approach requires no parameterized mutual information estimator, no additional projector, no asymmetric architectures, and, most importantly, no negative samples, which can be costly to obtain. We show that the new objective essentially 1) discards augmentation-variant information by learning invariant representations, and 2) prevents degenerate solutions by decorrelating features across different dimensions. Our theoretical analysis further shows that the new objective can be seen as an instantiation of the Information Bottleneck principle under the self-supervised setting. Despite its simplicity, our method performs competitively on seven public graph datasets.
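For concreteness, below is a minimal sketch of such a CCA-style feature-level objective, assuming two (N, D) node-embedding matrices produced from the two augmented views. The function name, the exact normalization, and the trade-off weight `lam` are illustrative assumptions, not the paper's reference implementation:

```python
import torch

def cca_ssg_loss(z1: torch.Tensor, z2: torch.Tensor, lam: float = 1e-3) -> torch.Tensor:
    """Feature-level loss over two views' embeddings (a sketch, not reference code)."""
    n, d = z1.shape
    # Standardize each feature dimension and scale by 1/sqrt(N), so that
    # z.T @ z approximates the empirical feature correlation matrix.
    z1 = (z1 - z1.mean(dim=0)) / (z1.std(dim=0) + 1e-8) / n ** 0.5
    z2 = (z2 - z2.mean(dim=0)) / (z2.std(dim=0) + 1e-8) / n ** 0.5

    # Invariance term: align the two views' representations, discarding
    # augmentation-variant information.
    invariance = (z1 - z2).pow(2).sum()

    # Decorrelation term: push each view's feature correlation matrix toward
    # identity, preventing degenerate (collapsed) solutions.
    eye = torch.eye(d, device=z1.device)
    decorrelation = (z1.T @ z1 - eye).pow(2).sum() + (z2.T @ z2 - eye).pow(2).sum()

    return invariance + lam * decorrelation
```

Note that a single scalar `lam` trades off invariance against decorrelation, and no projector, asymmetric branch, or negative sampling appears anywhere in the objective, consistent with the claims above.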
Author Information
Hengrui Zhang (University of Illinois at Chicago)
Qitian Wu (Shanghai Jiao Tong University)
Junchi Yan (Shanghai Jiao Tong University)
David Wipf (Microsoft Research)
Philip S Yu (University of Illinois at Chicago)
More from the Same Authors
- 2021 Spotlight: On the Value of Infinite Gradients in Variational Autoencoder Models
  Bin Dai · Li Wenliang · David Wipf
- 2021 Poster: A Bi-Level Framework for Learning to Solve Combinatorial Optimization on Graphs
  Runzhong Wang · Zhigang Hua · Gan Liu · Jiayi Zhang · Junchi Yan · Feng Qi · Shuang Yang · Jun Zhou · Xiaokang Yang
- 2021 Poster: A Biased Graph Neural Network Sampler with Near-Optimal Regret
  Qingru Zhang · David Wipf · Quan Gan · Le Song
- 2021 Poster: GRIN: Generative Relation and Intention Network for Multi-agent Trajectory Prediction
  Longyuan Li · Jian Yao · Li Wenliang · Tong He · Tianjun Xiao · Junchi Yan · David Wipf · Zheng Zhang
- 2021 Poster: Towards Open-World Feature Extrapolation: An Inductive Graph Learning Approach
  Qitian Wu · Chenxiao Yang · Junchi Yan
- 2021 Poster: Learning High-Precision Bounding Box for Rotated Object Detection via Kullback-Leibler Divergence
  Xue Yang · Xiaojiang Yang · Jirui Yang · Qi Ming · Wentao Wang · Qi Tian · Junchi Yan
- 2021 Poster: Bridging Explicit and Implicit Deep Generative Models via Neural Stein Estimators
  Qitian Wu · Rui Gao · Hongyuan Zha
- 2021 Poster: On Joint Learning for Solving Placement and Routing in Chip Design
  Ruoyu Cheng · Junchi Yan
- 2021 Poster: On the Value of Infinite Gradients in Variational Autoencoder Models
  Bin Dai · Li Wenliang · David Wipf
- 2020: Broad Learning: A New Perspective on Mining Big Data
  Philip S Yu
- 2019 Poster: Learning Latent Process from High-Dimensional Event Sequences via Efficient Sampling
  Qitian Wu · Zixuan Zhang · Xiaofeng Gao · Junchi Yan · Guihai Chen
- 2017 Oral: From Bayesian Sparsity to Gated Recurrent Nets
  Hao He · Bo Xin · Satoshi Ikehata · David Wipf
- 2017 Poster: From Bayesian Sparsity to Gated Recurrent Nets
  Hao He · Bo Xin · Satoshi Ikehata · David Wipf
- 2017 Poster: PredRNN: Recurrent Neural Networks for Predictive Learning using Spatiotemporal LSTMs
  Yunbo Wang · Mingsheng Long · Jianmin Wang · Zhifeng Gao · Philip S Yu
- 2017 Poster: Learning Multiple Tasks with Multilinear Relationship Networks
  Mingsheng Long · Zhangjie Cao · Jianmin Wang · Philip S Yu
- 2016 Poster: A Pseudo-Bayesian Algorithm for Robust PCA
  Tae-Hyun Oh · Yasuyuki Matsushita · In So Kweon · David Wipf
- 2016 Poster: Maximal Sparsity with Deep Networks?
  Bo Xin · Yizhou Wang · Wen Gao · David Wipf · Baoyuan Wang
- 2013 Poster: Non-Uniform Camera Shake Removal Using a Spatially-Adaptive Sparse Penalty
  Haichao Zhang · David Wipf
- 2013 Oral: Non-Uniform Camera Shake Removal Using a Spatially-Adaptive Sparse Penalty
  Haichao Zhang · David Wipf