

Poster

Unifying Generation and Prediction on Graphs with Latent Graph Diffusion

Cai Zhou · Xiyuan Wang · Muhan Zhang

East Exhibit Hall A-C #3004
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

In this paper, we propose the first framework that enables solving graph learning tasks of all levels (node, edge, and graph) and all types (generation, regression, and classification) with one model. We first formulate prediction tasks, including regression and classification, as (conditional) generation, a generic formulation that enables diffusion models to perform deterministic tasks with provable guarantees. We then propose Latent Graph Diffusion (LGD), a generative model that can generate node-, edge-, and graph-level features of all categories simultaneously. We achieve this by embedding the graph structures and features into a latent space with a powerful encoder whose latent representations can be decoded back, then training a diffusion model in that latent space. LGD is also capable of conditional generation through a specifically designed cross-attention mechanism. Leveraging LGD and the unified "all tasks as generation" formulation, our framework can solve tasks of all levels and all types. We verify the effectiveness of our framework with extensive experiments, where our models achieve state-of-the-art or highly competitive results across a wide range of generation and regression tasks.
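To make the pipeline described in the abstract concrete, here is a minimal PyTorch sketch of the latent-diffusion idea: encode graph features into a latent space, denoise latent tokens with a network that attends to a condition embedding via cross-attention, and train with a standard noise-prediction loss. Everything here is an illustrative assumption, not the authors' implementation: the class name `LatentGraphDiffusion`, the linear encoder/decoder, the linear noising schedule, and the shapes are all hypothetical.

```python
import torch
import torch.nn as nn

class LatentGraphDiffusion(nn.Module):
    """Hypothetical sketch: encode graph features to a latent space,
    denoise with self- plus cross-attention, decode back."""
    def __init__(self, feat_dim, latent_dim, cond_dim, n_heads=4):
        super().__init__()
        # Encoder/decoder map node (or edge/graph) features to/from the latent space.
        # (The paper uses a powerful graph encoder; a Linear layer is a stand-in.)
        self.encoder = nn.Linear(feat_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, feat_dim)
        # Denoiser: self-attention over latent tokens, plus cross-attention
        # to a condition embedding (e.g., a class label or target property).
        self.self_attn = nn.MultiheadAttention(latent_dim, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(latent_dim, n_heads,
                                                kdim=cond_dim, vdim=cond_dim,
                                                batch_first=True)
        self.out = nn.Linear(latent_dim, latent_dim)

    def denoise(self, z_t, cond):
        h, _ = self.self_attn(z_t, z_t, z_t)
        h, _ = self.cross_attn(h, cond, cond)  # conditioning via cross-attention
        return self.out(h)

    def training_step(self, x, cond):
        z0 = self.encoder(x)                        # embed features into latent space
        t = torch.rand(z0.size(0), 1, 1)            # random diffusion time in (0, 1)
        noise = torch.randn_like(z0)
        z_t = (1 - t) * z0 + t * noise              # simple linear noising (assumed schedule)
        pred = self.denoise(z_t, cond)
        return nn.functional.mse_loss(pred, noise)  # standard noise-prediction objective

# Usage: a batch of 8 graphs with 20 nodes, 16-dim node features,
# and a 32-dim condition embedding per graph (all shapes illustrative).
model = LatentGraphDiffusion(feat_dim=16, latent_dim=64, cond_dim=32)
x = torch.randn(8, 20, 16)
cond = torch.randn(8, 1, 32)
loss = model.training_step(x, cond)
loss.backward()
```

Under this framing, a regression or classification task is handled by treating the target as the quantity to be generated conditioned on the input graph, so the same denoising model serves both prediction and generation.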
