

Poster

Fast and Memory-Efficient Video Diffusion Using Streamlined Inference

Zheng Zhan · Yushu Wu · Yifan Gong · Zichong Meng · Zhenglun Kong · Changdi Yang · Geng Yuan · Pu Zhao · Wei Niu · Yanzhi Wang

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

The rapid progress in artificial intelligence-generated content (AIGC), especially with diffusion models, has significantly advanced the development of high-quality video generation. However, current video diffusion models exhibit demanding computational requirements and high peak memory usage, especially for generating longer and higher-resolution videos. These limitations greatly hinder the practical application of video diffusion models on standard hardware platforms. To tackle this issue, we present a novel, training-free framework named Streamlined Inference, which leverages the temporal and spatial properties of video diffusion models. Our approach integrates three core components: Feature Slicer, Operator Grouping, and Step Rehash. Specifically, Feature Slicer effectively partitions input features into sub-features, and Operator Grouping processes each sub-feature with a group of consecutive operators, yielding significant memory reduction without sacrificing quality or speed. Step Rehash further exploits the similarity between adjacent diffusion steps, accelerating inference by skipping unnecessary steps. Extensive experiments demonstrate that our approach significantly reduces peak memory and computational overhead, making it feasible to generate high-quality videos on a single consumer GPU (e.g., reducing the peak memory of AnimateDiff from 42 GB to 11 GB, with faster inference on a 2080 Ti).
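To make the slicing-and-grouping and step-skipping ideas concrete, the sketch below gives a minimal PyTorch illustration under simplifying assumptions. It is not the authors' implementation: the names (sliced_grouped_forward, step_rehash_sampling, op_group, num_slices, recompute_every) are hypothetical, the slicing is exact only for operators that act independently along the sliced axis, and the Euler-style update stands in for a real diffusion sampler.

    import torch
    import torch.nn as nn

    def sliced_grouped_forward(x, op_group, num_slices=4, dim=2):
        """Sketch of Feature Slicer + Operator Grouping (hypothetical).

        Splits x (e.g. B, C, T, H, W video features) into sub-features
        along `dim` and pushes each slice through the whole group of
        consecutive operators before moving on, so only one slice's
        intermediate activations are resident at a time. This is exact
        only for operators without a receptive field along `dim`
        (e.g. 1x1 convolutions, per-element activations); operators
        that mix positions along `dim` need the overlap handling
        described in the paper.
        """
        outs = []
        for chunk in torch.chunk(x, num_slices, dim=dim):
            y = chunk
            for op in op_group:  # the full operator group runs per slice
                y = op(y)
            outs.append(y)
            # peak memory now scales with the slice size rather than
            # with the full feature tensor
        return torch.cat(outs, dim=dim)

    @torch.no_grad()
    def step_rehash_sampling(model, x, timesteps, recompute_every=2):
        """Sketch of Step Rehash (hypothetical): adjacent diffusion
        steps produce highly similar features, so the model output is
        recomputed only every `recompute_every` steps and reused in
        between. The toy update rule below is a stand-in for the
        actual sampler."""
        cached = None
        for i, t in enumerate(timesteps):
            if cached is None or i % recompute_every == 0:
                cached = model(x, t)  # full (expensive) forward pass
            x = x - (1.0 / len(timesteps)) * cached  # toy update rule
        return x

    # Toy usage: a pointwise operator group over (B, C, T, H, W) features.
    op_group = [nn.Conv3d(8, 8, kernel_size=1), nn.SiLU()]
    feats = torch.randn(1, 8, 16, 64, 64)
    out = sliced_grouped_forward(feats, op_group, num_slices=4)

In this sketch, slicing trades a single large forward pass for several small ones over the same operator group, while step rehashing trades a little sampling fidelity for fewer model evaluations; the paper's actual criteria for which features to slice and which steps to skip are more refined than the fixed schedule shown here.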
