Poster
Adaptive Denoising via GainTuning
Sreyas Mohan · Joshua L Vincent · Ramon Manzorro · Peter Crozier · Carlos Fernandez-Granda · Eero P Simoncelli

Tue Dec 07 04:30 PM -- 06:00 PM (PST)

Deep convolutional neural networks (CNNs) for image denoising are typically trained on large datasets. These models achieve the current state of the art, but they do not generalize well to data that deviate from the training distribution. Recent work has shown that it is possible to train denoisers on a single noisy image. These models adapt to the features of the test image, but their performance is limited by the small amount of information used to train them. Here we propose "GainTuning", a methodology by which CNN models pre-trained on large datasets can be adaptively and selectively adjusted for individual test images. To avoid overfitting, GainTuning optimizes a single multiplicative scaling parameter (the "Gain") of each channel in the convolutional layers of the CNN. We show that GainTuning improves state-of-the-art CNNs on standard image-denoising benchmarks, boosting their denoising performance on nearly every image in a held-out test set. These adaptive improvements are even more substantial for test images differing systematically from the training data, either in noise level or image type. We illustrate the potential of adaptive GainTuning in a scientific application to transmission-electron-microscope images, using a CNN that is pre-trained on synthetic data. In contrast to the existing methodology, GainTuning is able to faithfully reconstruct the structure of catalytic nanoparticles from these data at extremely low signal-to-noise ratios.
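
The abstract does not give implementation details, so the following is a minimal PyTorch sketch of how per-channel gain adaptation could look: the pre-trained weights are frozen, a learnable gain is attached to the output of each convolutional layer, and only the gains are optimized on the single noisy test image with a self-supervised masked ("blind-spot") loss. The hook-based wrapping, the choice of loss, and all hyperparameters are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch of per-channel "gain" fine-tuning on one noisy test image.
# The masked self-supervised loss, hook-based gain wrapping, and hyperparameters
# are illustrative assumptions; the paper's exact fine-tuning objective may differ.
import torch
import torch.nn as nn


def attach_gains(model: nn.Module):
    """Scale the output of every conv layer by a learnable per-channel gain."""
    gains = []
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            gain = nn.Parameter(torch.ones(module.out_channels, 1, 1))
            gains.append(gain)
            # A forward hook that returns a value replaces the layer's output.
            module.register_forward_hook(lambda mod, inp, out, g=gain: out * g)
    return gains


def gain_tune(model: nn.Module, noisy: torch.Tensor, steps: int = 100, lr: float = 1e-3):
    """Adapt only the gains to a single noisy image (shape assumed (1, C, H, W))."""
    model.eval()
    for p in model.parameters():              # freeze the pre-trained weights
        p.requires_grad_(False)
    gains = attach_gains(model)
    opt = torch.optim.Adam(gains, lr=lr)
    for _ in range(steps):
        mask = (torch.rand_like(noisy) < 0.05).float()   # blind-spot style mask
        corrupted = noisy * (1 - mask)                    # hide the masked pixels
        denoised = model(corrupted)
        # Measure error only at the masked pixels, which the network never saw.
        loss = ((denoised - noisy) ** 2 * mask).sum() / mask.sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return model(noisy)                   # denoised output with tuned gains
```

Because only one scalar per channel is optimized, the number of adapted parameters is tiny compared to the full network, which is what limits overfitting to the single test image.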

Author Information

Sreyas Mohan (NYU)
Joshua L Vincent (Arizona State University)
Ramon Manzorro (Universidad de Zaragoza)
Peter Crozier (Arizona State University)
Carlos Fernandez-Granda (NYU)
Eero P Simoncelli (Flatiron Institute / New York University)

More from the Same Authors

  • 2021 Poster: Convolutional Normalization: Improving Deep Convolutional Network Robustness and Training »
    Sheng Liu · Xiao Li · Yuexiang Zhai · Chong You · Zhihui Zhu · Carlos Fernandez-Granda · Qing Qu
  • 2021 Poster: Stochastic Solutions for Linear Inverse Problems using the Prior Implicit in a Denoiser »
    Zahra Kadkhodaie · Eero P Simoncelli
  • 2021 Poster: Impression learning: Online representation learning with synaptic plasticity »
    Colin Bredenberg · Benjamin Lyo · Eero P Simoncelli · Cristina Savin
  • 2020 Poster: Early-Learning Regularization Prevents Memorization of Noisy Labels »
    Sheng Liu · Jonathan Niles-Weed · Narges Razavian · Carlos Fernandez-Granda
  • 2019 : Poster Session »
    Jonathan Scarlett · Piotr Indyk · Ali Vakilian · Adrian Weller · Partha P Mitra · Benjamin Aubin · Bruno Loureiro · Florent Krzakala · Lenka Zdeborová · Kristina Monakhova · Joshua Yurtsever · Laura Waller · Hendrik Sommerhoff · Michael Moeller · Rushil Anirudh · Shuang Qiu · Xiaohan Wei · Zhuoran Yang · Jayaraman Thiagarajan · Salman Asif · Michael Gillhofer · Johannes Brandstetter · Sepp Hochreiter · Felix Petersen · Dhruv Patel · Assad Oberai · Akshay Kamath · Sushrut Karmalkar · Eric Price · Ali Ahmed · Zahra Kadkhodaie · Sreyas Mohan · Eero Simoncelli · Carlos Fernandez-Granda · Oscar Leong · Wesam Sakla · Rebecca Willett · Stephan Hoyer · Jascha Sohl-Dickstein · Samuel Greydanus · Gauri Jagatap · Chinmay Hegde · Michael Kellman · Jonathan Tamir · Nouamane Laanait · Ousmane Dia · Mirco Ravanelli · Jonathan Binas · Negar Rostamzadeh · Shirin Jalali · Tiantian Fang · Alex Schwing · Sébastien Lachapelle · Philippe Brouillard · Tristan Deleu · Simon Lacoste-Julien · Stella Yu · Arya Mazumdar · Ankit Singh Rawat · Yue Zhao · Jianshu Chen · Xiaoyang Li · Hubert Ramsauer · Gabrio Rizzuti · Nikolaos Mitsakos · Dingzhou Cao · Thomas Strohmer · Yang Li · Pei Peng · Gregory Ongie
  • 2019 Poster: Data-driven Estimation of Sinusoid Frequencies »
    Gautier Izacard · Sreyas Mohan · Carlos Fernandez-Granda