

Poster

Computation Tree: A Transferable Pattern Towards Graph Foundation Models

Zehong Wang · Zheyuan Zhang · Nitesh Chawla · Chuxu Zhang · Yanfang Ye

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Inspired by the success of foundation models in applications such as ChatGPT, and given the ubiquity of graph data, one can envision the far-reaching impact that Graph Foundation Models (GFMs) could have in areas such as scientific research, social network analysis, drug discovery, and e-commerce. Despite significant progress on pre-trained graph neural networks, no GFM has yet achieved the desired performance across diverse graph-learning tasks. Building GFMs may rely on a vocabulary that encodes transferable patterns shared among different tasks and domains. Unlike for images and text, defining such transferable patterns for graphs remains an open question. In this paper, we aim to bridge this gap by rethinking transferable patterns on graphs as computation trees, i.e., the subtree structures derived from the message-passing process. Based on this insight, we propose a cross-task, cross-domain graph foundation model named GFT, short for Graph Foundation model with Tree vocabulary. By leveraging computation trees to define tokens within the transferable vocabulary, GFT improves model generalization and reduces the risk of negative transfer. Theoretical analyses and extensive experimental studies demonstrate the transferability of computation trees and the effectiveness of GFT across diverse tasks and domains in graph learning.
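The abstract does not spell out what a computation tree is concretely, so the following is a minimal illustrative sketch, not the paper's implementation: unrolling the depth-L rooted subtree that an L-layer message-passing GNN implicitly aggregates over for a given node. The function name `computation_tree` and the toy adjacency-list graph are assumptions for illustration only.

```python
def computation_tree(adj, root, depth):
    """Unroll the depth-`depth` computation tree rooted at `root`.

    `adj` maps each node to a list of its neighbors. The returned nested
    (node, children) structure mirrors how an L-layer message-passing GNN
    aggregates information: each node's subtree contains one child subtree
    per neighbor, unrolled to the given depth. Nodes may repeat across the
    tree, just as they do in message passing.
    """
    if depth == 0:
        return (root, [])
    return (root, [computation_tree(adj, nbr, depth - 1) for nbr in adj[root]])

# Toy graph: a triangle (0-1-2) with a pendant node 3 attached to node 2.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

tree = computation_tree(adj, root=0, depth=2)
print(tree)
# (0, [(1, [(0, []), (2, [])]), (2, [(0, []), (1, []), (3, [])])])
```

Under this reading, treating such subtrees as vocabulary tokens means two nodes from different graphs or domains map to similar tokens whenever their local message-passing neighborhoods unroll to similar trees, which is the structural basis for transfer that the abstract describes.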
