

Poster

AverNet: All-in-one Video Restoration for Time-varying Unknown Degradations

Haiyu Zhao · Lei Tian · Xinyan Xiao · Peng Hu · Yuanbiao Gou · Xi Peng

East Exhibit Hall A-C #1406
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Traditional video restoration approaches were designed to recover clean videos from a specific, predefined type of degradation, making them ineffective at handling multiple unknown types of degradation. To address this issue, several studies have been conducted and have shown promising results. However, these studies overlook that the degradations in a video usually change over time, dubbed time-varying unknown degradations (TUD). To tackle this less-explored challenge, we propose an innovative method, termed the All-in-one VidEo Restoration Network (AverNet), which comprises two core modules, i.e., a Prompt-Guided Alignment (PGA) module and a Prompt-Conditioned Enhancement (PCE) module. Specifically, PGA addresses the pixel shifts caused by time-varying degradations by learning and utilizing prompts to align video frames at the pixel level. To handle multiple unknown degradations, PCE recasts the restoration task as a conditional restoration problem by implicitly establishing a conditional map between degradations and ground truths. Thanks to the collaboration between the PGA and PCE modules, AverNet empirically demonstrates its effectiveness in recovering videos from TUD. Extensive experiments are carried out on two synthesized datasets featuring seven types of degradations with random corruption levels.
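To illustrate the two-module idea described in the abstract, the minimal PyTorch sketch below shows one plausible reading: a learned prompt guides per-pixel offsets to align a neighboring frame's features (PGA-like), and another learned prompt conditions the enhancement of the fused features (PCE-like). All module names, dimensions, and internals here are assumptions for illustration, not the authors' actual architecture.

```python
# Hypothetical sketch of prompt-guided alignment and prompt-conditioned
# enhancement; not the official AverNet implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PromptGuidedAlignment(nn.Module):
    """Predicts per-pixel offsets from a learned prompt and warps the
    neighboring frame's features onto the current frame (assumed design)."""

    def __init__(self, channels=64, prompt_dim=16):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(1, prompt_dim, 1, 1))
        self.offset_net = nn.Sequential(
            nn.Conv2d(2 * channels + prompt_dim, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 2, 3, padding=1),  # (dx, dy) per pixel
        )

    def forward(self, curr_feat, neigh_feat):
        b, _, h, w = curr_feat.shape
        prompt = self.prompt.expand(b, -1, h, w)
        offsets = self.offset_net(torch.cat([curr_feat, neigh_feat, prompt], dim=1))
        # Build a normalized sampling grid and shift it by the predicted offsets.
        gy, gx = torch.meshgrid(
            torch.linspace(-1, 1, h, device=curr_feat.device),
            torch.linspace(-1, 1, w, device=curr_feat.device),
            indexing="ij",
        )
        base = torch.stack([gx, gy], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
        norm = offsets.permute(0, 2, 3, 1) / torch.tensor(
            [w / 2.0, h / 2.0], device=curr_feat.device
        )
        return F.grid_sample(neigh_feat, base + norm, align_corners=True)


class PromptConditionedEnhancement(nn.Module):
    """Modulates features with a learned degradation prompt so restoration is
    conditioned on the (unknown) degradation (assumed design)."""

    def __init__(self, channels=64, prompt_dim=16):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(1, prompt_dim, 1, 1))
        self.body = nn.Sequential(
            nn.Conv2d(channels + prompt_dim, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, feat):
        b, _, h, w = feat.shape
        prompt = self.prompt.expand(b, -1, h, w)
        return feat + self.body(torch.cat([feat, prompt], dim=1))


if __name__ == "__main__":
    pga = PromptGuidedAlignment()
    pce = PromptConditionedEnhancement()
    curr, neigh = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
    aligned = pga(curr, neigh)        # neighbor features warped to current frame
    restored = pce(curr + aligned)    # prompt-conditioned enhancement of fusion
    print(restored.shape)             # torch.Size([1, 64, 32, 32])
```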
