

Poster

Warped Diffusion: Solving Video Inverse Problems with Image Diffusion Models

Giannis Daras · Weili Nie · Karsten Kreis · Alex Dimakis · Morteza Mardani · Nikola Kovachki · Arash Vahdat

East Exhibit Hall A-C #2707
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract: Naively using image models to solve video inverse problems often leads to flickering, texture sticking, and temporal inconsistency in the generated videos. To tackle these problems, in this paper we view frames as continuous functions in 2D space and videos as sequences of continuous warping transformations between frames. This perspective allows us to train function-space diffusion models only on *images* and use them to solve temporally correlated inverse problems. These function-space diffusion models need to be equivariant with respect to the underlying spatial transformations. To ensure temporal consistency, we introduce a simple post-hoc, test-time guidance toward (self-)equivariant solutions. Our method allows us to deploy state-of-the-art latent diffusion models, such as Stable Diffusion XL, to solve video inverse problems. We demonstrate the effectiveness of our method on video inpainting and $8\times$ video super-resolution, outperforming existing techniques based on noise transformations. We provide generated video results at the following (anonymized) URL: https://anonneurips2024.github.io/
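The test-time equivariance guidance described in the abstract can be illustrated with a minimal PyTorch sketch, not the authors' implementation: it penalizes the mismatch between warping the denoiser's output and denoising the warped input, i.e. it pushes the sample toward satisfying $f(T(x)) \approx T(f(x))$ for a warp $T$. The names `denoiser`, `warp_grid`, and `guidance_scale`, and the assumption that the model predicts the clean frame, are illustrative choices, not the paper's API.

```python
import torch
import torch.nn.functional as F

def equivariance_guidance_loss(denoiser, x_t, t, warp_grid):
    """Self-equivariance penalty: f(T(x)) should match T(f(x)).

    denoiser:  hypothetical callable (noisy frame, timestep) -> clean-frame estimate
    x_t:       noisy frame (or latent), shape (B, C, H, W)
    t:         diffusion timestep
    warp_grid: sampling grid for the frame-to-frame warp T, shape (B, H, W, 2)
    """
    x0_hat = denoiser(x_t, t)
    # T(f(x)): warp the denoised estimate to the next frame
    warped_pred = F.grid_sample(x0_hat, warp_grid, align_corners=False)
    # f(T(x)): denoise the warped noisy input instead
    x_t_warped = F.grid_sample(x_t, warp_grid, align_corners=False)
    pred_of_warped = denoiser(x_t_warped, t)
    # penalize deviation from equivariance under the warp
    return F.mse_loss(pred_of_warped, warped_pred)

def guided_step(denoiser, x_t, t, warp_grid, guidance_scale=1.0):
    """One post-hoc guidance update: nudge x_t along the negative loss gradient.

    guidance_scale is a hypothetical tuning knob for the guidance strength.
    """
    x_t = x_t.detach().requires_grad_(True)
    loss = equivariance_guidance_loss(denoiser, x_t, t, warp_grid)
    grad, = torch.autograd.grad(loss, x_t)
    return x_t.detach() - guidance_scale * grad
```

In a sampling loop, `guided_step` would be interleaved with the usual diffusion updates so that consecutive frames stay consistent under the estimated warp; the warp itself (e.g. from optical flow) is supplied externally.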
