

Poster

Masked Pre-training Enables Universal Zero-shot Denoiser

Xiaoxiao Ma · Zhixiang Wei · Yi Jin · Pengyang Ling · Tianle Liu · Ben Wang · Junkang Dai · Huaian Chen

East Exhibit Hall A-C #1402
[ Project Page ]
Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract: In this work, we observe that a model trained on vast general images via a masking strategy naturally embeds knowledge of their distribution, and thus spontaneously acquires the underlying potential for strong image denoising. Based on this observation, we propose a novel zero-shot denoising paradigm, i.e., $\textbf{M}$asked $\textbf{P}$re-train then $\textbf{I}$terative fill ($\textbf{MPI}$). MPI first trains a model via masking and then employs the pre-trained weights for high-quality zero-shot denoising of a single noisy image. Concretely, MPI comprises two key procedures: $\textbf{1) Masked Pre-training}$ trains the model to reconstruct massive natural images under random masking, yielding generalizable representations and the potential for valid zero-shot denoising on images with varying noise degradations, and even on distinct image types. $\textbf{2) Iterative filling}$ exploits the pre-trained knowledge for effective zero-shot denoising: it iteratively optimizes the image using the pre-trained weights, alternately reconstructing different image parts, and gradually assembles the fully denoised image within a limited number of iterations. Comprehensive experiments across various noisy scenarios underscore the notable advances of MPI over previous approaches, with a marked reduction in inference time.
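
To make the iterative-filling idea concrete, below is a minimal PyTorch sketch of zero-shot denoising on a single noisy image: at each step a random subset of pixels is hidden, the pre-trained network is fine-tuned to reconstruct them from the visible noisy pixels, and its predictions are accumulated into a running average that becomes the denoised output. The stand-in network, mask ratio, learning rate, iteration count, and EMA schedule are illustrative assumptions, not the authors' exact configuration.

    import torch
    import torch.nn as nn

    # Stand-in for the masked pre-trained network; in practice the weights
    # obtained from masked pre-training on natural images would be loaded here.
    model = nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 3, 3, padding=1),
    )

    def iterative_fill(model, noisy, num_iters=500, mask_ratio=0.5,
                       lr=1e-4, ema=0.99):
        """Sketch of iterative filling: alternately reconstruct hidden image
        parts with the pre-trained model and assemble the denoised image."""
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        denoised = noisy.clone()
        for _ in range(num_iters):
            # 1 = visible pixel, 0 = hidden pixel; a fresh mask each iteration
            # means different image parts are reconstructed in alternation.
            mask = (torch.rand_like(noisy) > mask_ratio).float()
            pred = model(noisy * mask)
            # Supervise only the hidden pixels: the model never sees the
            # noise values it must predict, so it regresses toward the signal.
            loss = ((pred - noisy) ** 2 * (1 - mask)).mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            # Gradually assemble the fully denoised image as a running average.
            with torch.no_grad():
                denoised = ema * denoised + (1 - ema) * pred
        return denoised

    noisy = torch.rand(1, 3, 64, 64)  # placeholder for a single noisy image
    clean_est = iterative_fill(model, noisy)

The exponential moving average here is one simple way to "gradually assemble" the result from predictions made under different masks; averaging per-pixel predictions only where they were hidden would be an equally plausible variant.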
