
GLOBER: Coherent Non-autoregressive Video Generation via GLOBal Guided Video DecodER

Mingzhen Sun · Weining Wang · Zihan Qin · Jiahui Sun · Sihan Chen · Jing Liu

Great Hall & Hall B1+B2 (level 1) #602
Wed 13 Dec 3 p.m. PST — 5 p.m. PST


Video generation requires both global coherence and local realism. This work presents GLOBER, a novel non-autoregressive method that first generates global features to provide comprehensive global guidance and then synthesizes video frames from those features to produce coherent videos. Specifically, we propose a video auto-encoder in which a video encoder encodes videos into global features and a video decoder, built on a diffusion model, decodes the global features and synthesizes video frames in a non-autoregressive manner. For maximum flexibility, the video decoder perceives temporal information through normalized frame indexes, which enables it to synthesize arbitrary sub-clips with predetermined starting and ending frame indexes. Moreover, a novel adversarial loss is introduced to improve the global coherence and local realism of the synthesized video frames. Finally, we employ a diffusion-based video generator to model the global features produced by the video encoder for video generation. Extensive experimental results demonstrate the effectiveness and efficiency of the proposed method, which achieves new state-of-the-art results on multiple benchmarks.
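The normalized-frame-index conditioning described above can be illustrated with a minimal sketch. The helper names below (`sub_clip_indexes`, `index_embedding`) are hypothetical, not from the paper: the first maps an arbitrary sub-clip's absolute frame positions into [0, 1], and the second builds a sinusoidal embedding of each normalized index, a common way to condition a decoder on a continuous temporal position.

```python
import numpy as np

def sub_clip_indexes(start_frame, end_frame, total_frames):
    """Normalize absolute frame positions to [0, 1] so a decoder can
    synthesize any sub-clip independently of the video's length.
    (Hypothetical helper; names are not from the paper.)"""
    frames = np.arange(start_frame, end_frame + 1)
    return frames / (total_frames - 1)

def index_embedding(t, dim=8):
    """Sinusoidal embedding of normalized frame indexes, one standard
    choice for injecting continuous positions into a diffusion decoder."""
    freqs = 2.0 ** np.arange(dim // 2)          # geometric frequency ladder
    angles = np.outer(t, freqs * np.pi)         # (num_frames, dim // 2)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

# Decode only frames 4..7 of a 16-frame video: the decoder never needs
# the other frames, because each index is already normalized to [0, 1].
t = sub_clip_indexes(4, 7, 16)   # 4/15, 5/15, 6/15, 7/15
emb = index_embedding(t)         # shape (4, 8): one conditioning vector per frame
```

Because the decoder sees only these normalized positions (alongside the global features), the same model can render a whole video or any slice of it non-autoregressively.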