Poster
Pseudo-Siamese Directional Transformers for Self-Supervised Real-World Denoising
Yuhui Quan · Tianxiang Zheng · Hui Ji
East Exhibit Hall A-C #1200
Real-world image denoising remains a challenging task. This paper studies self-supervised image denoising, which requires only noisy images captured in a single shot. We revamp the blind-spot technique by leveraging the transformer's capability for long-range pixel interactions, which is crucial for effectively removing noise dependence among related pixels, a requirement for achieving strong performance with the blind-spot technique. The proposed method integrates these elements with two key innovations: a directional self-attention (DSA) module that performs self-attention over a half-plane grid, creating a sophisticated blind-spot structure, and a pseudo-Siamese architecture with mutual learning that mitigates the performance impact of the restricted attention grid in DSA. Experiments on benchmark datasets demonstrate that our method outperforms existing self-supervised and clean-image-free methods. This combination of blind-spot and transformer techniques provides a natural synergy for tackling real-world image denoising.
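To make the half-plane idea concrete, below is a minimal PyTorch sketch of a single-head self-attention layer whose attention grid is restricted to the half-plane strictly above each query pixel, so the query never attends to itself (a blind-spot-style restriction). This is an illustration only, not the authors' implementation: the names `DirectionalSelfAttention` and `half_plane_mask`, the single-head design, and the choice of "rows above" as the half-plane are all assumptions made for the sketch; in practice such a module would be applied to rotated copies of the input to cover the remaining directions.

```python
import torch
import torch.nn as nn


def half_plane_mask(h, w, device=None):
    """Boolean mask of shape (h*w, h*w); True where attention is allowed.

    A query pixel at row i may attend only to pixels in rows strictly above it,
    so the query itself (and its own row) is excluded. This is one possible
    half-plane restriction; other directions follow by rotating the input.
    """
    rows = torch.arange(h, device=device).repeat_interleave(w)  # row index of each flattened pixel
    return rows.unsqueeze(1) > rows.unsqueeze(0)                # key row < query row


class DirectionalSelfAttention(nn.Module):
    """Single-head self-attention restricted to a half-plane grid (illustrative sketch)."""

    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x):                                  # x: (B, H, W, C)
        b, h, w, c = x.shape
        q, k, v = self.qkv(x.reshape(b, h * w, c)).chunk(3, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.scale      # (B, H*W, H*W)
        mask = half_plane_mask(h, w, device=x.device)
        attn = attn.masked_fill(~mask, float("-inf"))      # forbid out-of-half-plane keys
        attn = torch.softmax(attn, dim=-1)
        attn = torch.nan_to_num(attn, nan=0.0)             # top row has no valid keys
        out = self.proj(attn @ v)
        return out.reshape(b, h, w, c)


# Usage: a 16x16 feature map with 32 channels.
x = torch.randn(1, 16, 16, 32)
y = DirectionalSelfAttention(32)(x)                        # (1, 16, 16, 32)
```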