Poster

Hamiltonian Variational Auto-Encoder

Anthony Caterini · Arnaud Doucet · Dino Sejdinovic

Room 210 #5

Keywords: [ MCMC ] [ Variational Inference ] [ Deep Autoencoders ] [ Latent Variable Models ]


Abstract:

Variational Auto-Encoders (VAE) have become very popular techniques to perform inference and learning in latent variable models as they allow us to leverage the rich representational power of neural networks to obtain flexible approximations of the posterior of latent variables as well as tight evidence lower bounds (ELBO). Combined with stochastic variational inference, this provides a methodology scaling to large datasets. However, for this methodology to be practically efficient, it is necessary to obtain low-variance unbiased estimators of the ELBO and its gradients with respect to the parameters of interest. While the use of Markov chain Monte Carlo (MCMC) techniques such as Hamiltonian Monte Carlo (HMC) has been previously suggested to achieve this [23, 26], the proposed methods require specifying reverse kernels which have a large impact on performance. Additionally, the resulting unbiased estimator of the ELBO for most MCMC kernels is typically not amenable to the reparameterization trick. We show here how to optimally select reverse kernels in this setting and, by building upon Hamiltonian Importance Sampling (HIS) [17], we obtain a scheme that provides low-variance unbiased estimators of the ELBO and its gradients using the reparameterization trick. This allows us to develop a Hamiltonian Variational Auto-Encoder (HVAE). This method can be re-interpreted as a target-informed normalizing flow [20] which, within our context, only requires a few evaluations of the gradient of the sampled likelihood and trivial Jacobian calculations at each iteration.
