

Poster

Continuous Hierarchical Representations with Poincaré Variational Auto-Encoders

Emile Mathieu · Charline Le Lan · Chris Maddison · Ryota Tomioka · Yee Whye Teh

East Exhibition Hall B + C #109

Keywords: [ Algorithms -> Nonlinear Dimensionality Reduction and Manifold Learning ] [ Algorithms -> Representation Learning ] [ Deep Learning ] [ Generative Models ]


Abstract:

The Variational Auto-Encoder (VAE) is a popular method for learning a generative model and embeddings of the data. Many real-world datasets are hierarchically structured, yet traditional VAEs map data into a Euclidean latent space, which cannot efficiently embed tree-like structures; hyperbolic spaces, with their negative curvature, can. We therefore endow VAEs with a Poincaré ball model of hyperbolic geometry as a latent space and rigorously derive the methods needed to work with the two main generalisations of the Gaussian distribution on that space. We empirically show better generalisation to unseen data than the Euclidean counterpart, and both qualitatively and quantitatively better recovery of hierarchical structures.
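To make the latent-space construction concrete, below is a minimal NumPy sketch of one of the two Gaussian generalisations the abstract alludes to: a wrapped normal on the Poincaré ball, built by sampling a Euclidean Gaussian in the tangent space at the origin and pushing it onto the ball with the exponential map. The function names and the choice of isotropic covariance at the origin are illustrative assumptions, not the paper's actual implementation; the exponential-map formula shown is the standard one for curvature −1 at the origin.

```python
import numpy as np

def exp_map_0(v, eps=1e-9):
    # Exponential map at the origin of the Poincaré ball (curvature -1):
    # exp_0(v) = tanh(||v||) * v / ||v||, which maps any tangent vector
    # v in R^d to a point strictly inside the open unit ball.
    norm = np.linalg.norm(v, axis=-1, keepdims=True)
    return np.tanh(norm) * v / np.maximum(norm, eps)

def sample_wrapped_normal_at_origin(dim, sigma, n, rng):
    # Wrapped normal centred at the origin (illustrative helper name):
    # draw tangent vectors from an isotropic Euclidean Gaussian, then
    # "wrap" them onto the manifold via the exponential map.
    v = rng.normal(scale=sigma, size=(n, dim))
    return exp_map_0(v)

rng = np.random.default_rng(0)
z = sample_wrapped_normal_at_origin(dim=2, sigma=1.0, n=1000, rng=rng)
# Every sample lies inside the unit ball, as required of Poincaré-ball points.
assert np.all(np.linalg.norm(z, axis=-1) < 1.0)
```

A full VAE would additionally centre the distribution at an arbitrary mean via parallel transport and correct the density with the log-determinant of the exponential map, but the sampling step above is the core of the construction.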
