

Poster

InstructG2I: Synthesizing Images from Multimodal Attributed Graphs

Bowen Jin · Ziqi Pang · Bingjun Guo · Yu-Xiong Wang · Jiaxuan You · Jiawei Han

East Exhibit Hall A-C #1603
[ Project Page ]
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

In this paper, we approach an overlooked yet critical task, Graph2Image: generating images from multimodal attributed graphs (MMAGs). This task poses significant challenges due to the explosion in graph size, the dependencies among graph entities, and the need for controllability over graph conditions. To address these challenges, we propose InstructG2I, a graph context-conditioned diffusion model. InstructG2I first exploits the graph structure and multimodal information to perform informative neighbor sampling, combining Personalized PageRank with re-ranking based on vision-language features. A Graph-QFormer encoder then adaptively encodes the graph nodes into an auxiliary set of graph prompts that guide the denoising process of the diffusion model. Finally, we propose graph classifier-free guidance, which enables controllable generation by varying the strength of the graph guidance and of the multiple edges connected to a node. Extensive experiments on three datasets from different domains demonstrate the effectiveness and controllability of our approach. Code is available at https://anonymous.4open.science/r/Graph2Image-submit-607E/.
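
The neighbor-sampling step is easy to picture in code. Below is a minimal Python sketch, not the authors' implementation: it runs Personalized PageRank from the target node and then re-ranks the top candidates by vision-language embedding similarity. The graph object, the `clip_embed` function, and the cutoffs `k_ppr` and `k_final` are all illustrative assumptions.

```python
import networkx as nx
import numpy as np

def sample_neighbors(graph: nx.Graph, target, clip_embed, k_ppr=50, k_final=5):
    """Sketch of PPR neighbor sampling with vision-language re-ranking."""
    # Personalized PageRank with all restart mass on the target node.
    ppr = nx.pagerank(graph, alpha=0.85, personalization={target: 1.0})
    candidates = sorted(
        (n for n in ppr if n != target), key=ppr.get, reverse=True
    )[:k_ppr]

    # Re-rank candidates by cosine similarity between the target's
    # vision-language embedding and each candidate's embedding.
    # clip_embed is a hypothetical node -> vector encoder (e.g., CLIP-style).
    q = clip_embed(target)
    q = q / np.linalg.norm(q)

    def score(n):
        v = clip_embed(n)
        return float(q @ (v / np.linalg.norm(v)))

    return sorted(candidates, key=score, reverse=True)[:k_final]
```

The two stages are complementary: PageRank captures structural proximity in the graph, while the embedding re-ranking filters those structural neighbors for semantic relevance.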
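The graph prompts can be produced by a QFormer-style module in which a fixed set of learnable query tokens cross-attends over the sampled neighbors' features. The sketch below shows the general pattern only, not the paper's Graph-QFormer: the dimensions, layer count, and omission of self-attention/feed-forward sublayers are simplifying assumptions.

```python
import torch
import torch.nn as nn

class GraphQFormerSketch(nn.Module):
    """Learnable queries cross-attend over neighbor features to yield
    a fixed-size set of graph prompt tokens (illustrative sketch)."""

    def __init__(self, dim=768, n_queries=8, n_heads=8, n_layers=2):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_queries, dim) * 0.02)
        self.layers = nn.ModuleList(
            [nn.MultiheadAttention(dim, n_heads, batch_first=True)
             for _ in range(n_layers)]
        )
        self.norms = nn.ModuleList([nn.LayerNorm(dim) for _ in range(n_layers)])

    def forward(self, neighbor_feats):  # (B, n_neighbors, dim)
        b = neighbor_feats.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        for attn, norm in zip(self.layers, self.norms):
            # Queries attend to neighbor features; residual + norm.
            out, _ = attn(q, neighbor_feats, neighbor_feats)
            q = norm(q + out)
        return q  # (B, n_queries, dim) graph prompts for conditioning
```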
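Graph classifier-free guidance follows the familiar multi-condition CFG pattern: combine an unconditional noise prediction, a text-conditioned one, and a text-plus-graph one, with a separate scale on the graph term. The exact formulation in the paper may differ; `eps_model`, `null_text`, `null_graph`, and the default scales below are assumptions.

```python
import torch

@torch.no_grad()
def graph_cfg_eps(eps_model, x_t, t, text, graph_prompt,
                  null_text, null_graph, s_text=7.5, s_graph=2.0):
    """Sketch of classifier-free guidance with an extra graph condition."""
    # Three forward passes: unconditional, text-only, text + graph.
    e_uncond = eps_model(x_t, t, null_text, null_graph)
    e_text   = eps_model(x_t, t, text,      null_graph)
    e_full   = eps_model(x_t, t, text,      graph_prompt)

    # Text guidance plus a separately scaled graph-guidance term;
    # raising s_graph pulls the sample toward the graph context.
    return (e_uncond
            + s_text  * (e_text - e_uncond)
            + s_graph * (e_full - e_text))
```

Keeping `s_graph` as its own knob is what makes the generation controllable: setting it to zero recovers ordinary text-conditioned sampling, while larger values emphasize the style or content carried by the sampled neighbors.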
