Diffusion Models for Video Prediction and Infilling
Tobias Höppe · Arash Mehrjou · Stefan Bauer · Didrik Nielsen · Andrea Dittadi
Event URL: https://openreview.net/forum?id=k-A-KFG7GZB

Video prediction and infilling require strong, temporally coherent generative capabilities. Diffusion models have shown remarkable success in several generative tasks, but have not been extensively explored in the video domain. We present Random-Mask Video Diffusion (RaMViD), which extends image diffusion models to videos using 3D convolutions, and introduces a new conditioning technique during training. By varying the mask we condition on, the model is able to perform video prediction, infilling, and upsampling. Due to our simple conditioning scheme, we can utilize the same architecture as used for unconditional training, which allows us to train the model in a conditional and unconditional fashion at the same time. We evaluate the model on two benchmark datasets for video prediction, on which we achieve state-of-the-art results, and one for video generation.
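The core idea of the random-mask conditioning described above can be sketched in a few lines: sample a random subset of frames to condition on, keep those frames clean, and apply noise only to the remaining frames that the model must generate. The function names, the single-scale noising (standing in for a full diffusion schedule), and the `p_uncond` parameter are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def random_frame_mask(num_frames, rng, p_uncond=0.25):
    """Sample a per-frame conditioning mask (illustrative sketch).

    With probability p_uncond, no frame is conditioned on, which trains
    the same network unconditionally. Otherwise a random non-empty,
    non-full subset of frames is marked as known.
    Returns a boolean array: True = frame is conditioned on (kept clean).
    """
    mask = np.zeros(num_frames, dtype=bool)
    if rng.random() < p_uncond:
        return mask  # unconditional training step
    # condition on at least one frame, leaving at least one to generate
    k = int(rng.integers(1, num_frames))
    idx = rng.choice(num_frames, size=k, replace=False)
    mask[idx] = True
    return mask

def noise_masked_video(video, mask, noise_scale, rng):
    """Add Gaussian noise only to frames NOT in the conditioning mask.

    video: array of shape (T, C, H, W). Conditioned frames stay clean;
    the denoiser sees the concatenated result, so the architecture is
    identical to the unconditional one. Choosing which frames are clean
    selects the task: leading frames -> prediction, a middle gap ->
    infilling, every other frame -> temporal upsampling.
    """
    noisy = video.copy()
    noise = rng.standard_normal(video.shape)
    noisy[~mask] = video[~mask] + noise_scale * noise[~mask]
    return noisy
```

At inference time, the same masking convention would be reused: the known frames are held fixed while the masked frames are denoised, so one trained model covers prediction, infilling, and upsampling simply by changing the mask.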

Author Information

Tobias Höppe (KTH Stockholm)
Arash Mehrjou (Max Planck Institute)
Stefan Bauer (Max Planck institute)
Didrik Nielsen (DTU Compute)
Andrea Dittadi (Technical University of Denmark)