Poster

Leveraging the Exact Likelihood of Deep Latent Variable Models

Pierre-Alexandre Mattei · Jes Frellsen

Room 210 #5

Keywords: [ Variational Inference ] [ Generative Models ] [ Latent Variable Models ] [ Missing Data ]


Abstract:

Deep latent variable models (DLVMs) combine the approximation abilities of deep neural networks with the statistical foundations of generative models. Variational methods are commonly used for inference; however, the exact likelihood of these models has been largely overlooked. The purpose of this work is to study the general properties of this quantity and to show how they can be leveraged in practice. We focus on important inferential problems that rely on the likelihood: estimation and missing data imputation. First, we investigate maximum likelihood estimation for DLVMs: in particular, we show that most unconstrained models used for continuous data have an unbounded likelihood function. We demonstrate that this problematic behaviour is a source of mode collapse. We also show how to ensure the existence of maximum likelihood estimates, and draw useful connections with nonparametric mixture models. Finally, we describe an algorithm for missing data imputation using the exact conditional likelihood of a DLVM. On several data sets, our algorithm consistently and significantly outperforms the usual imputation scheme used for DLVMs.
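To make the unboundedness claim concrete, consider a DLVM with Gaussian outputs: z ~ p(z) and x | z ~ N(mu_theta(z), sigma_theta(z)^2 I), where mu_theta and sigma_theta denote the decoder's output functions (this notation is ours, chosen for illustration). The log-likelihood of a data set x_1, ..., x_n is then

\ell(\theta) \;=\; \sum_{i=1}^{n} \log \int \mathcal{N}\!\bigl(x_i \mid \mu_\theta(z),\, \sigma_\theta(z)^2 I\bigr)\, p(z)\, \mathrm{d}z .

If the decoder is flexible enough to pin \mu_\theta(z) = x_1 while driving \sigma_\theta(z) \to 0 on a set of z with positive prior mass, the i = 1 term diverges, so \sup_\theta \ell(\theta) = +\infty. This mirrors the classical degeneracy of maximum likelihood for Gaussian mixtures with unconstrained covariances, consistent with the connection to nonparametric mixture models drawn in the abstract.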

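The imputation algorithm mentioned above is, in the paper, a Metropolis-within-Gibbs sampler: it alternates a Metropolis step on the latent code z (proposing from the variational posterior) with an exact Gibbs step that redraws the missing entries from the decoder's conditional. The sketch below illustrates the scheme on a toy linear-Gaussian decoder in numpy; the toy model, the encoder_mean/tau proposal, and all function names are our stand-ins for a trained DLVM and its variational posterior, not the authors' code.

import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian "decoder" standing in for a trained DLVM:
# prior z ~ N(0, I_k), observation x | z ~ N(W z + b, sigma^2 I_d).
d, k = 5, 2
W = rng.normal(size=(d, k))
b = rng.normal(size=d)
sigma = 0.5

def log_prior(z):
    # log N(z | 0, I_k), up to an additive constant
    return -0.5 * (z @ z)

def log_lik(x, z):
    # log p(x | z), up to an additive constant
    mu = W @ z + b
    return -0.5 * np.sum(((x - mu) / sigma) ** 2)

# Crude Gaussian "encoder" q(z | x): in a real DLVM this role is played
# by the trained variational posterior (an assumption of this sketch).
tau = 0.5
def encoder_mean(x):
    return np.linalg.solve(W.T @ W + np.eye(k), W.T @ (x - b))

def encoder_logpdf(z, x):
    # log q(z | x), up to an additive constant (tau is fixed, so it cancels)
    return -0.5 * np.sum(((z - encoder_mean(x)) / tau) ** 2)

def metropolis_within_gibbs(x, obs, n_iter=500):
    """Impute x[~obs] by sampling the exact conditional p(x_miss | x_obs)."""
    x, miss = x.copy(), ~obs
    x[miss] = 0.0                                  # arbitrary initial fill
    z = encoder_mean(x) + tau * rng.normal(size=k)
    for _ in range(n_iter):
        # Independence-Metropolis step targeting p(z | x), proposing from q(z | x)
        z_new = encoder_mean(x) + tau * rng.normal(size=k)
        log_a = (log_prior(z_new) + log_lik(x, z_new) - encoder_logpdf(z_new, x)) \
              - (log_prior(z)     + log_lik(x, z)     - encoder_logpdf(z, x))
        if np.log(rng.uniform()) < log_a:
            z = z_new
        # Exact Gibbs step: redraw missing entries from the decoder conditional
        x[miss] = (W @ z + b)[miss] + sigma * rng.normal(size=miss.sum())
    return x

# Usage: hide two coordinates of a model sample and impute them.
x_true = W @ rng.normal(size=k) + b + sigma * rng.normal(size=d)
obs = np.array([True, True, True, False, False])
print(metropolis_within_gibbs(x_true, obs))

The baseline ("the usual imputation scheme used for DLVMs") corresponds to dropping the accept/reject test and always taking z = z_new, i.e. a pseudo-Gibbs sampler; the Metropolis correction is what makes the chain target the exact conditional p(x_miss | x_obs).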