We tackle a challenging blind image denoising problem, in which only single noisy images are available for training a denoiser and no information about the noise is known, except that it is zero-mean, additive, and independent of the clean image. In such a setting, which often occurs in practice, it is not possible to train a denoiser with standard discriminative training or with the recently developed Noise2Noise (N2N) training; the former requires the underlying clean image for each given noisy image, and the latter requires two independently realized noisy images per clean image. To that end, we propose the GAN2GAN (Generated-Artificial-Noise to Generated-Artificial-Noise) method, which first learns a generative model that can 1) simulate the noise in the given noisy images and 2) generate rough, noisy estimates of the clean images, then 3) iteratively trains a denoiser with noisy image pairs subsequently synthesized from the generative model (as in N2N). In the results, we show that the denoiser trained with GAN2GAN achieves impressive denoising performance on both synthetic and real-world datasets in the blind denoising setting.
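The key premise behind training on synthesized noisy pairs is the N2N principle: with zero-mean noise that is independent of the clean signal, regressing one noisy realization onto another converges to the same solution as regressing onto the clean target. The following is a minimal numerical sketch of that principle (not the paper's actual generator or denoiser); the Gaussian noise model and the trivial constant "denoiser" are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = 0.7  # one underlying clean "pixel" value

# Stand-in for the generative model: two independent noisy
# realizations of the same clean signal (Gaussian noise is an
# assumption here; the method itself learns the noise model).
y_in = clean + rng.normal(0.0, 0.3, size=10000)   # noisy inputs
y_tgt = clean + rng.normal(0.0, 0.3, size=10000)  # noisy targets

# N2N-style least squares with the simplest possible denoiser, a
# single constant c minimizing mean((c - y_tgt)^2). The minimizer
# is the mean of the noisy targets, which approaches the clean
# value because the noise is zero-mean.
c = y_tgt.mean()
print(abs(c - clean) < 0.05)
```

Even though no clean target is ever seen, the estimate lands near the clean value, which is why pairs synthesized by the learned generative model can substitute for the real noisy pairs N2N normally requires.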
Author Information
Sungmin Cha (Sungkyunkwan University)
More from the Same Authors
- 2021 Poster: SSUL: Semantic Segmentation with Unknown Label for Exemplar-based Class-Incremental Learning
  Sungmin Cha · beomyoung kim · YoungJoon Yoo · Taesup Moon
- 2020 Poster: Continual Learning with Node-Importance based Adaptive Group Sparse Regularization
  Sangwon Jung · Hongjoon Ahn · Sungmin Cha · Taesup Moon
- 2019 Poster: Uncertainty-based Continual Learning with Adaptive Regularization
  Hongjoon Ahn · Sungmin Cha · Donggyu Lee · Taesup Moon