Poster

Phased Consistency Models

Fu-Yun Wang · Zhaoyang Huang · Alexander Bergman · Dazhong Shen · Peng Gao · Michael Lingelbach · Keqiang Sun · Weikang Bian · Guanglu Song · Yu Liu · Xiaogang Wang · Hongsheng Li

East Exhibit Hall A-C #2810
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Consistency Models (CMs) have made significant progress in accelerating the generation of diffusion models. However, their application to high-resolution, text-conditioned image generation in the latent space remains unsatisfactory. In this paper, we identify three key flaws in the current design of Latent Consistency Models (LCMs). We investigate the reasons behind these limitations and propose Phased Consistency Models (PCMs), which generalize the design space and address the identified limitations. Our evaluations demonstrate that PCMs outperform LCMs across 1--16 step generation settings. Although PCMs are specifically designed for multi-step refinement, their 1-step generation results are comparable to those of previously state-of-the-art methods designed specifically for 1-step generation. Furthermore, we show that the methodology of PCMs is versatile and applicable to video generation, enabling us to train a state-of-the-art few-step text-to-video generator. Our code is available at https://github.com/G-U-N/Phased-Consistency-Model.
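The "multi-step refinement" mentioned in the abstract builds on the standard consistency-model sampling loop: predict a clean sample in one shot, re-inject noise at a smaller noise level, and predict again. Below is a minimal, hedged sketch of that generic loop. The `consistency_fn` here is a toy stand-in for the learned network (a real model would be a trained neural network), and the noise schedule values are illustrative; this shows generic consistency sampling, not the phased design specific to PCMs.

```python
import numpy as np

def consistency_fn(x, t):
    # Toy stand-in for a learned consistency function f(x, t) -> x0.
    # It simply shrinks the input toward zero so the sketch is runnable;
    # a real consistency model replaces this with a neural network.
    return x / (1.0 + t)

def multistep_consistency_sample(shape, timesteps, rng):
    """Generic multi-step consistency sampling:
    predict a clean sample, then re-noise at the next (smaller) noise level
    and refine. `timesteps` must be in decreasing order."""
    t0 = timesteps[0]
    x = rng.standard_normal(shape) * t0          # start from pure noise at level t0
    x0 = consistency_fn(x, t0)                   # one-step prediction
    for t in timesteps[1:]:
        x = x0 + rng.standard_normal(shape) * t  # re-inject noise at level t
        x0 = consistency_fn(x, t)                # refine the prediction
    return x0

rng = np.random.default_rng(0)
sample = multistep_consistency_sample((4,), [80.0, 20.0, 5.0, 1.0], rng)
print(sample.shape)  # (4,)
```

With a single entry in `timesteps`, the loop reduces to the 1-step generation setting; adding entries gives the 2--16 step settings evaluated in the paper.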
