Poster
Pure Transformers are Powerful Graph Learners
Jinwoo Kim · Dat Nguyen · Seonwoo Min · Sungjun Cho · Moontae Lee · Honglak Lee · Seunghoon Hong

Thu Dec 01 09:00 AM -- 11:00 AM (PST) @ Hall J #438

We show that standard Transformers without graph-specific modifications can lead to promising results in graph learning both in theory and practice. Given a graph, we simply treat all nodes and edges as independent tokens, augment them with token embeddings, and feed them to a Transformer. With an appropriate choice of token embeddings, we prove that this approach is theoretically at least as expressive as an invariant graph network (2-IGN) composed of equivariant linear layers, which is already more expressive than all message-passing Graph Neural Networks (GNNs). When trained on a large-scale graph dataset (PCQM4Mv2), our method, coined Tokenized Graph Transformer (TokenGT), achieves significantly better results compared to GNN baselines and competitive results compared to Transformer variants with sophisticated graph-specific inductive bias. Our implementation is available at https://github.com/jw9730/tokengt.
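The sketch below illustrates the node-and-edge tokenization described in the abstract; it is not the authors' implementation (see the linked repository for that). The class name GraphAsTokens, the shared feature dimension for node and edge features, and the random orthonormal node identifiers (which assume the number of nodes does not exceed the identifier dimension) are simplifying assumptions made here for illustration only.

```python
import torch
import torch.nn as nn

class GraphAsTokens(nn.Module):
    """Treat every node and edge of a graph as an independent token for a plain Transformer (illustrative sketch)."""

    def __init__(self, feat_dim, id_dim=64, d_model=256, nhead=8, num_layers=4):
        super().__init__()
        self.id_dim = id_dim
        # each token is [features ; node identifier of endpoint u ; node identifier of endpoint v]
        self.proj = nn.Linear(feat_dim + 2 * id_dim, d_model)
        self.type_emb = nn.Embedding(2, d_model)  # 0 = node token, 1 = edge token
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, node_feat, edge_feat, edge_index):
        # node_feat: (n, feat_dim), edge_feat: (m, feat_dim), edge_index: (2, m)
        n, m = node_feat.size(0), edge_feat.size(0)
        # random orthonormal node identifiers; this sketch assumes n <= id_dim
        q, _ = torch.linalg.qr(torch.randn(self.id_dim, self.id_dim))
        node_id = q[:n]                                                     # (n, id_dim)
        u, v = edge_index[0], edge_index[1]
        node_tok = torch.cat([node_feat, node_id, node_id], dim=-1)         # node v  -> [X_v, P_v, P_v]
        edge_tok = torch.cat([edge_feat, node_id[u], node_id[v]], dim=-1)   # edge uv -> [E_uv, P_u, P_v]
        tokens = self.proj(torch.cat([node_tok, edge_tok], dim=0))          # (n + m, d_model)
        token_type = torch.cat([torch.zeros(n, dtype=torch.long),
                                torch.ones(m, dtype=torch.long)])
        tokens = tokens + self.type_emb(token_type)
        # a standard Transformer encoder: no graph-specific attention or message passing
        return self.encoder(tokens.unsqueeze(0))                            # (1, n + m, d_model)

# toy usage: a 5-node, 6-edge graph with 16-dimensional node and edge features
model = GraphAsTokens(feat_dim=16)
out = model(torch.randn(5, 16), torch.randn(6, 16), torch.randint(0, 5, (2, 6)))
```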

Author Information

Jinwoo Kim (KAIST)
Dat Nguyen (Korea Advanced Institute of Science & Technology)
Seonwoo Min (Seoul National University)
Sungjun Cho (LG AI Research)
Moontae Lee (University of Illinois at Chicago)
Honglak Lee (LG AI Research / U. Michigan)
Seunghoon Hong (Korea Advanced Institute of Science and Technology)
