

Poster in Workshop: Learning-Based Solutions for Inverse Problems

What’s in a Prior? Learned Proximal Networks for Inverse Problems

Zhenghan Fang · Sam Buchanan · Jeremias Sulam

Keywords: [ Input convex neural networks ] [ Convergent PnP ] [ Plug-and-play ] [ Inverse Problems ] [ Explicit regularizer ] [ Proximal operators ]


Abstract:

Proximal operators are ubiquitous in inverse problems, commonly appearing as part of algorithmic strategies to regularize problems that are otherwise ill-posed. Modern deep learning models have been brought to bear for these tasks as well, as in the frameworks of plug-and-play and deep unrolling, where they loosely resemble proximal operators. Yet these approaches offer no guarantee that the general functions implemented by neural networks are the proximal operators of any function, nor do they characterize the function whose proximal operator they might approximate. Herein we present a framework to develop learned proximal networks (LPN), which are exact proximal operators of a data-driven regularizer, and show that a new training strategy, dubbed proximal matching, guarantees that the learned regularizer recovers the log-prior of the true data distribution. Thus, LPN provide general, unsupervised proximal operators that can be applied to general inverse problems. We illustrate our results in a series of cases of increasing complexity, demonstrating that these models not only achieve state-of-the-art restoration results but also provide a window into the priors learned from data.
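For reference, the abstract does not spell out the definitions it relies on; the following is a minimal sketch using the standard formulation of a proximal operator and a plug-and-play proximal-gradient step. The symbols A, b, R, lambda, and eta are illustrative and not taken from the poster.

% Proximal operator of a regularizer R, scaled by \lambda > 0 (requires amsmath):
\[
  \operatorname{prox}_{\lambda R}(y) \;=\; \operatorname*{arg\,min}_{x} \; \tfrac{1}{2}\,\|x - y\|_2^2 \;+\; \lambda R(x).
\]
% For a regularized inverse problem with forward operator A and measurements b,
%   \min_x \; \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda R(x),
% a proximal-gradient iteration with step size \eta alternates a gradient step on the
% data-fidelity term with an application of the proximal operator:
\[
  x^{k+1} \;=\; \operatorname{prox}_{\eta\lambda R}\!\bigl(x^{k} - \eta\, A^{\top}(A x^{k} - b)\bigr).
\]
% Plug-and-play methods substitute a learned mapping for prox_{\eta\lambda R}; an LPN is
% such a mapping constrained to be the exact proximal operator of some learned R.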
