

Poster
in
Workshop: Bayesian Deep Learning

Relaxed-Responsibility Hierarchical Discrete VAEs

Matthew Willetts · Xenia Miscouridou · Stephen J Roberts · Chris C Holmes


Abstract:

Successfully training Variational Autoencoders (VAEs) with a hierarchy of discrete latent variables remains an area of active research. Vector-Quantised VAEs are a powerful approach to discrete VAEs, but naive hierarchical extensions can be unstable during training. Leveraging insights from classical methods of inference, we introduce Relaxed-Responsibility Vector-Quantisation, a novel way to parameterise discrete latent variables and a refinement of relaxed Vector-Quantisation that gives better performance and more stable training. This enables a novel approach to hierarchical discrete variational autoencoders with many layers of latent variables (here up to 32) that we train end-to-end. Among hierarchical probabilistic deep generative models with discrete latent variables trained end-to-end, we achieve state-of-the-art bits-per-dim results on various standard datasets.
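To make the core idea concrete: the abstract describes replacing the hard nearest-neighbour assignment of standard vector quantisation with a relaxed, responsibility-based assignment over codebook entries. The sketch below is a minimal illustration, not the authors' implementation. It assumes responsibilities are computed as a softmax over negative squared distances to the codebook (as in EM for mixture models) and relaxed with a Gumbel-Softmax sample; the module name, temperature value, and evaluation-time hard assignment are all placeholder choices.

```python
# Minimal sketch of relaxed, responsibility-based vector quantisation.
# Assumptions (not taken from the paper's code): responsibilities are a
# softmax over negative squared distances to codebook entries, relaxed
# with a Gumbel-Softmax sample during training.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RelaxedResponsibilityVQ(nn.Module):
    def __init__(self, num_codes: int, code_dim: int, temperature: float = 0.5):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(num_codes, code_dim))
        self.temperature = temperature  # placeholder value

    def forward(self, z_e: torch.Tensor) -> torch.Tensor:
        # z_e: encoder outputs, shape (batch, code_dim).
        # Squared Euclidean distance to every codebook entry: (batch, num_codes).
        dists = torch.cdist(z_e, self.codebook).pow(2)
        # Responsibilities as logits: closer codes get higher probability,
        # mirroring the E-step responsibilities of a mixture model.
        logits = -dists
        if self.training:
            # Relaxed one-hot sample; differentiable w.r.t. both the
            # encoder outputs and the codebook.
            weights = F.gumbel_softmax(logits, tau=self.temperature, hard=False)
        else:
            # Hard nearest-code assignment at evaluation time.
            weights = F.one_hot(
                logits.argmax(dim=-1), self.codebook.shape[0]
            ).float()
        # Quantised latent: responsibility-weighted combination of codes.
        return weights @ self.codebook
```

Under these assumptions, the relaxed sample keeps gradients flowing through both the encoder and the codebook, avoiding the straight-through estimator used in standard VQ-VAE training and suggesting why such a parameterisation could stabilise deep hierarchies of discrete latents.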
