

Poster

Progressive Cross-Scale Self-Supervised Blind Image Deconvolution via Implicit Neural Representation

Tianjing Zhang · Yuhui Quan · Hui Ji

Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Blind image deconvolution (BID) is an important yet challenging image recovery problem. Most existing deep learning-based BID methods require supervised training with ground truth (GT) images. This paper introduces a self-supervised method for BID that does not require GT images. The key challenge is to regularize the training to prevent over-fitting in the absence of GT images. By leveraging an exact relationship among the blurred image, latent image, and blur kernel across consecutive scales, we propose an effective cross-scale consistency loss. This is implemented by representing the image and kernel with implicit neural representations (INRs), whose resolution-free property enables consistent yet efficient computation for network training at multiple scales. Combined with a progressive coarse-to-fine training scheme, the proposed method significantly outperforms existing self-supervised methods on several datasets in extensive experiments.
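The resolution-free property mentioned above is what lets an INR be rendered consistently at any scale: the same continuous function is simply queried on grids of different sizes. The sketch below illustrates this with a hypothetical minimal INR (random Fourier features plus a tiny MLP with untrained random weights — the architecture and all names are illustrative assumptions, not the paper's actual network). Because the coarse grid's coordinates form a subset of the fine grid's coordinates, the two renderings agree exactly at the shared points, with no interpolation error across scales.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical minimal INR: random Fourier features + a tiny MLP.
# Weights are random here for illustration; in the actual method
# the INR would be trained against the cross-scale consistency loss.
B = rng.normal(size=(2, 32))            # frequency matrix for 2-D coords
W1 = rng.normal(size=(64, 64)) * 0.1    # hidden layer
W2 = rng.normal(size=(64, 1)) * 0.1     # output layer

def inr(coords):
    """Evaluate the INR at continuous (x, y) coordinates in [0, 1]^2."""
    feats = coords @ B                                             # (N, 32)
    emb = np.concatenate([np.sin(feats), np.cos(feats)], axis=-1)  # (N, 64)
    return np.tanh(emb @ W1) @ W2                                  # (N, 1)

def sample_grid(n):
    """Render the INR on an n x n grid over [0, 1]^2."""
    t = np.linspace(0.0, 1.0, n)
    xx, yy = np.meshgrid(t, t, indexing="ij")
    coords = np.stack([xx.ravel(), yy.ravel()], axis=-1)
    return inr(coords).reshape(n, n)

fine = sample_grid(9)    # fine-scale rendering
coarse = sample_grid(5)  # coarse-scale rendering

# Resolution-free property: the 5x5 grid points are a subset of the
# 9x9 grid points, so the two renderings match exactly at those points.
assert np.allclose(coarse, fine[::2, ::2])
```

This is why a cross-scale loss can compare quantities rendered at consecutive scales without resampling artifacts: both scales are views of the same underlying continuous representation.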
