Poster
SF-V: Single Forward Video Generation Model
Zhixing Zhang · Yanyu Li · Yushu Wu · Yanwu Xu · Anil Kag · Ivan Skorokhodov · Willi Menapace · Aliaksandr Siarohin · Junli Cao · Dimitris Metaxas · Sergey Tulyakov · Jian Ren
East Exhibit Hall A-C #1703
Abstract:
Diffusion-based video generation models have demonstrated remarkable success in producing high-fidelity videos through an iterative denoising process. However, these models require multiple denoising steps during sampling, resulting in high computational costs. In this work, we propose a novel approach to obtain single-step video generation models by leveraging adversarial training to fine-tune pre-trained video diffusion models. We show that, through adversarial training, a multi-step video diffusion model, i.e., Stable Video Diffusion (SVD), can be trained to synthesize high-quality videos in a single forward pass, capturing both temporal and spatial dependencies in the video data. Extensive experiments demonstrate that our method achieves competitive generation quality with significantly reduced computational overhead for denoising (i.e., around a $23\times$ speedup compared with SVD and a $6\times$ speedup compared with existing works, with even better generation quality), paving the way for real-time video synthesis and editing.
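To make the single-step recipe concrete, below is a minimal, self-contained PyTorch sketch of the general idea the abstract describes: a generator (a stand-in for the fine-tuned SVD denoiser) maps noise to a latent video clip in one forward pass, while a discriminator supplies the adversarial training signal. All module definitions, tensor shapes, losses, and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of adversarial fine-tuning for one-step video generation.
# Generator/Discriminator are toy stand-ins, not the SVD UNet or the paper's critic.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Stand-in for the fine-tuned video denoiser: one forward pass maps
    noise (plus conditioning, omitted here) directly to a latent video clip."""
    def __init__(self, channels=4):
        super().__init__()
        self.net = nn.Conv3d(channels, channels, kernel_size=3, padding=1)

    def forward(self, noise):
        return self.net(noise)  # single forward pass, no iterative denoising

class Discriminator(nn.Module):
    """Stand-in critic scoring spatio-temporal realism of latent clips."""
    def __init__(self, channels=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(channels, 32, 3, padding=1), nn.SiLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, 1))

    def forward(self, video):
        return self.net(video)

gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-5)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-5)

real = torch.randn(2, 4, 14, 8, 8)   # (batch, channels, frames, H, W) latent clips
noise = torch.randn_like(real)

# Discriminator step: real clips vs. single-step generations.
fake = gen(noise).detach()
loss_d = F.softplus(-disc(real)).mean() + F.softplus(disc(fake)).mean()
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: produce a clip in one forward pass and fool the critic.
loss_g = F.softplus(-disc(gen(noise))).mean()
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

After training, inference is just `gen(noise)`: one network evaluation per video, which is where the reported speedup over multi-step denoising comes from.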