
Unconstrained Monotonic Neural Networks
Antoine Wehenkel · Gilles Louppe

Tue Dec 10 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #54

Monotonic neural networks have recently been proposed as a way to define invertible transformations. These transformations can be combined into powerful autoregressive flows that have been shown to be universal approximators of continuous probability distributions. Architectures that ensure monotonicity typically enforce constraints on weights and activation functions, which enables invertibility but leads to a cap on the expressiveness of the resulting transformations. In this work, we propose the Unconstrained Monotonic Neural Network (UMNN) architecture based on the insight that a function is monotonic as long as its derivative is strictly positive. In particular, this latter condition can be enforced with a free-form neural network whose only constraint is the positiveness of its output. We evaluate our new invertible building block within a new autoregressive flow (UMNN-MAF) and demonstrate its effectiveness on density estimation experiments. We also illustrate the ability of UMNNs to improve variational inference.
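The core insight of the abstract — a function is monotonic whenever its derivative is strictly positive — can be sketched numerically. Below is a minimal, hedged illustration (not the authors' implementation, which uses Clenshaw–Curtis quadrature and trained networks): a tiny random MLP plays the role of the free-form derivative network, its output is forced positive with ELU + 1, and the monotonic transformation is recovered by cumulative trapezoidal integration. All weights and grid choices are arbitrary stand-ins.

```python
import numpy as np

# A stand-in "free-form" derivative network: one hidden layer, random
# (untrained) weights. Any architecture works; only the output sign matters.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 1)), rng.normal(size=16)
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

def derivative_net(t):
    """Scalar MLP; ELU(out) + 1 makes the output strictly positive."""
    h = np.tanh(W1 @ np.atleast_1d(t) + b1)
    out = (W2 @ h + b2)[0]
    return out + 1.0 if out > 0 else np.exp(out)  # ELU + 1 > 0 everywhere

# Evaluate the positive integrand on a grid and integrate it cumulatively
# (trapezoidal rule). Because every increment is positive, F is monotone.
ts = np.linspace(-2.0, 2.0, 401)
vals = np.array([derivative_net(t) for t in ts])
F = np.concatenate(([0.0], np.cumsum((vals[1:] + vals[:-1]) / 2 * np.diff(ts))))

assert np.all(vals > 0)          # positive derivative ...
assert np.all(np.diff(F) > 0)    # ... hence strictly increasing F
```

Because F is strictly increasing, its inverse exists and can be found by bisection, which is what makes such a transformation usable as an invertible building block in a flow.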

Author Information

Antoine Wehenkel (ULiège)

I am currently a PhD student in machine learning at ULiège (Belgium), under the supervision of Professor Gilles Louppe. In 2018, I graduated with a MSc in computer engineering from ULiège. I spent my final year of study at the École Polytechnique Fédérale de Lausanne (EPFL) as an exchange student, where I completed my master's thesis on line parameter estimation for electrical distribution networks in the laboratory of Professor Jean-Yves Le Boudec. My main research interests revolve around statistics, machine learning, and information theory.

Gilles Louppe (University of Liège)
