Poster

Unity by Diversity: Improved Representation Learning for Multimodal VAEs

Thomas Sutter · Yang Meng · Andrea Agostini · Daphné Chopard · Norbert Fortin · Julia Vogt · Babak Shahbaba · Stephan Mandt


Abstract:

Variational Autoencoders for multimodal data hold promise for many tasks in data analysis, such as representation learning, conditional generation, and imputation. Current architectures either share the encoder output, the decoder input, or both across modalities to learn a shared representation, imposing hard constraints on the model. In this work, we show that a better latent representation can be obtained by replacing these hard constraints with a soft constraint. We propose a new mixture-of-experts prior that softly guides each modality's latent representation towards a shared aggregate posterior. This approach yields a superior latent representation and allows each encoding to better preserve information from its uncompressed original features. In extensive experiments on multiple benchmark datasets and two challenging real-world datasets, we show improved learned latent representations and imputation of missing data modalities compared to existing methods.
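To make the "soft constraint" idea concrete, the sketch below shows one way such a regularizer could look: each modality's Gaussian posterior is pulled towards a mixture-of-experts aggregate (the uniform mixture over all modalities' posteriors) via a Monte Carlo KL estimate. This is only an illustrative toy, not the paper's implementation; the posterior parameters (`mus`, `sigmas`) and the estimator `soft_moe_kl` are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical diagonal-Gaussian posteriors q_m(z | x_m), one row per modality.
mus = np.array([[0.0, 0.0], [1.0, 0.5], [-0.5, 1.0]])
sigmas = np.ones_like(mus)

def log_gauss(z, mu, sigma):
    """Log-density of a diagonal Gaussian at points z (shape: [n, d])."""
    return -0.5 * np.sum(((z - mu) / sigma) ** 2
                         + np.log(2 * np.pi * sigma ** 2), axis=-1)

def soft_moe_kl(m, n_samples=10_000):
    """Monte Carlo estimate of KL(q_m || h), where h is the mixture-of-experts
    aggregate: the uniform mixture over all modalities' posteriors."""
    z = mus[m] + sigmas[m] * rng.standard_normal((n_samples, mus.shape[1]))
    log_q = log_gauss(z, mus[m], sigmas[m])
    # log h(z) = log( (1/M) * sum_k q_k(z) ), computed stably in log-space.
    log_h = np.logaddexp.reduce(
        [log_gauss(z, mus[k], sigmas[k]) for k in range(len(mus))], axis=0
    ) - np.log(len(mus))
    return float(np.mean(log_q - log_h))

# One soft-regularization term per modality; in a VAE objective these would be
# added to the per-modality reconstruction losses.
kls = [soft_moe_kl(m) for m in range(len(mus))]
```

Because the aggregate mixture always contains q_m itself with weight 1/M, each term is bounded by log M, so the pull towards the shared posterior stays gentle; this is what distinguishes it from a hard shared-encoder constraint.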
