Poster
SyncTweedies: A General Generative Framework Based on Synchronized Diffusions
Jaihoon Kim · Juil Koo · Kyeongmin Yeo · Minhyuk Sung
East Exhibit Hall A-C #2605
We introduce a general diffusion synchronization framework for generating diverse visual content, including ambiguous images, panoramic images, mesh textures, and Gaussian splat textures, using a pretrained image diffusion model. We first present an analysis of various scenarios for synchronizing multiple diffusion processes through a canonical space. Based on this analysis, we introduce a novel synchronized diffusion method, SyncTweedies, which averages the outputs of Tweedie's formula while conducting denoising in multiple instance spaces. Unlike previous approaches that achieve synchronization through finetuning, SyncTweedies is zero-shot: it requires no finetuning and thus preserves the rich prior of diffusion models trained on Internet-scale image datasets without overfitting to specific domains. We verify that SyncTweedies offers the broadest applicability across diverse applications and superior performance compared to the previous state of the art in each application.
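The core idea of averaging Tweedie's-formula outputs through a canonical space can be sketched as a single synchronized denoising step. This is a minimal illustration, not the authors' implementation: the noise-predictor interface `eps_model`, the per-instance projections `to_canonical`/`from_canonical`, and the DDIM-style update are all simplifying assumptions for exposition.

```python
import numpy as np

def synctweedies_step(xs, t, eps_model, to_canonical, from_canonical, alpha_bars):
    """One synchronized denoising step (illustrative sketch of SyncTweedies).

    xs             -- list of noisy instance-space latents x_t^i
    eps_model      -- pretrained noise predictor eps(x, t) (assumed interface)
    to_canonical   -- per-instance projections: instance space -> canonical space
    from_canonical -- per-instance projections: canonical space -> instance space
    alpha_bars     -- cumulative noise schedule, indexed by timestep t
    """
    a_t = alpha_bars[t]
    a_prev = alpha_bars[t - 1] if t > 0 else 1.0

    # 1) Tweedie's formula in each instance space: estimate the clean sample
    #    x0 from the noisy latent x_t and the predicted noise.
    eps = [eps_model(x, t) for x in xs]
    x0s = [(x - np.sqrt(1.0 - a_t) * e) / np.sqrt(a_t) for x, e in zip(xs, eps)]

    # 2) Synchronize: project each Tweedie estimate into the canonical space,
    #    average them there, and project the average back to every instance.
    canon = np.mean([f(x0) for f, x0 in zip(to_canonical, x0s)], axis=0)
    x0s = [g(canon) for g in from_canonical]

    # 3) Deterministic DDIM-style update back in each instance space.
    return [np.sqrt(a_prev) * x0 + np.sqrt(1.0 - a_prev) * e
            for x0, e in zip(x0s, eps)]
```

With identity projections every instance receives the same averaged estimate, which is the degenerate case; in the paper's applications the projections map, e.g., texture-space views to rendered images, so the average enforces consistency across views.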