Poster
Generating Long Videos of Dynamic Scenes
Tim Brooks · Janne Hellsten · Miika Aittala · Ting-Chun Wang · Timo Aila · Jaakko Lehtinen · Ming-Yu Liu · Alexei Efros · Tero Karras

Wed Nov 30 09:00 AM -- 11:00 AM (PST) @ Hall J #342

We present a video generation model that accurately reproduces object motion, changes in camera viewpoint, and new content that arises over time. Existing video generation methods often fail to produce new content as a function of time while maintaining consistencies expected in real environments, such as plausible dynamics and object persistence. A common failure case is for content to never change due to over-reliance on inductive bias to provide temporal consistency, such as a single latent code that dictates content for the entire video. On the other extreme, without long-term consistency, generated videos may morph unrealistically between different scenes. To address these limitations, we prioritize the time axis by redesigning the temporal latent representation and learning long-term consistency from data by training on longer videos. We leverage a two-phase training strategy, where we separately train using longer videos at a low resolution and shorter videos at a high resolution. To evaluate the capabilities of our model, we introduce two new benchmark datasets with explicit focus on long-term temporal dynamics.
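The two-phase strategy described in the abstract can be illustrated with a minimal sketch (not the authors' released code): one network is trained on long clips at low resolution to capture long-term dynamics, and a separate super-resolution network is trained on short high-resolution clips to recover spatial detail. All module names, clip lengths, resolutions, and the simple MSE objectives below are placeholder assumptions for illustration only.

```python
# Illustrative sketch of two-phase video training: long/low-res, then short/high-res.
# Hypothetical modules and losses; not the paper's actual architecture or objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LowResVideoGenerator(nn.Module):
    """Hypothetical generator mapping a per-frame temporal latent sequence
    (shape [batch, frames, latent_dim]) to a long, low-resolution video."""
    def __init__(self, latent_dim=64, channels=3, size=36):
        super().__init__()
        self.channels, self.size = channels, size
        self.decode = nn.Linear(latent_dim, channels * size * size)

    def forward(self, z):                       # z: [B, T, latent_dim]
        b, t, _ = z.shape
        frames = self.decode(z)                 # [B, T, C*H*W]
        return frames.view(b, t, self.channels, self.size, self.size)

class VideoSuperResolution(nn.Module):
    """Hypothetical per-frame upsampler trained only on short high-res clips."""
    def __init__(self, channels=3, scale=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, video):                   # video: [B, T, C, H, W]
        b, t, c, h, w = video.shape
        out = self.net(video.view(b * t, c, h, w))
        return out.view(b, t, c, out.shape[-2], out.shape[-1])

# Phase 1: long clips (e.g. 128 frames) at low resolution, so long-term
# dynamics and new content over time can be learned from data.
low_res_gen = LowResVideoGenerator()
opt_lr = torch.optim.Adam(low_res_gen.parameters(), lr=2e-4)
long_clip = torch.rand(2, 128, 3, 36, 36)       # placeholder "real" data
z = torch.randn(2, 128, 64)                     # temporal latent sequence
loss_lr = (low_res_gen(z) - long_clip).pow(2).mean()   # stand-in objective
opt_lr.zero_grad()
loss_lr.backward()
opt_lr.step()

# Phase 2: short clips (e.g. 8 frames) at high resolution, trained separately,
# so spatial detail is learned without the memory cost of long sequences.
sr_model = VideoSuperResolution()
opt_sr = torch.optim.Adam(sr_model.parameters(), lr=2e-4)
short_hi = torch.rand(2, 8, 3, 144, 144)        # placeholder "real" data
short_lo = F.interpolate(short_hi.view(-1, 3, 144, 144), size=36).view(2, 8, 3, 36, 36)
loss_sr = (sr_model(short_lo) - short_hi).pow(2).mean()
opt_sr.zero_grad()
loss_sr.backward()
opt_sr.step()
```

In this sketch the two phases share no parameters and are optimized independently, which mirrors the abstract's statement that long low-resolution and short high-resolution videos are trained on separately.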

Author Information

Tim Brooks (UC Berkeley)
Janne Hellsten (NVIDIA)
Miika Aittala (NVIDIA)
Ting-Chun Wang (NVIDIA)
Timo Aila (NVIDIA)
Jaakko Lehtinen (Aalto University & NVIDIA)
Ming-Yu Liu (NVIDIA)
Alexei Efros (UC Berkeley)
Tero Karras (NVIDIA)
