

Poster

Explicitly disentangling image content from translation and rotation with spatial-VAE

Tristan Bepler · Ellen Zhong · Kotaro Kelley · Edward Brignole · Bonnie Berger

East Exhibition Hall B + C #89

Keywords: [ Deep Autoencoders ] [ Applications -> Computational Biology and Bioinformatics; Applications -> Computer Vision; Deep Learning ] [ Algorithms ] [ Representation Learning ]


Abstract:

Given an image dataset, we are often interested in finding data generative factors that encode semantic content independently from pose variables such as rotation and translation. However, current disentanglement approaches do not impose any specific structure on the learned latent representations. We propose a method for explicitly disentangling image rotation and translation from other unstructured latent factors in a variational autoencoder (VAE) framework. By formulating the generative model as a function of the spatial coordinate, we make the reconstruction error differentiable with respect to latent translation and rotation parameters. This formulation allows us to train a neural network to perform approximate inference on these latent variables while explicitly constraining them to only represent rotation and translation. We demonstrate that this framework, termed spatial-VAE, effectively learns latent representations that disentangle image rotation and translation from content and improves reconstruction over standard VAEs on several benchmark datasets, including applications to modeling continuous 2-D views of proteins from single particle electron microscopy and galaxies in astronomical images.
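To make the core idea concrete, here is a minimal PyTorch sketch of a coordinate-based decoder with a differentiable rotation and translation of the sampling grid, as the abstract describes. The names (SpatialDecoder, transform_coords), layer sizes, and grid resolution are illustrative assumptions rather than the authors' implementation.

import torch
import torch.nn as nn

class SpatialDecoder(nn.Module):
    """Maps a 2-D coordinate plus an unstructured latent z to a pixel value."""
    def __init__(self, z_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 + z_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords, z):
        # coords: (batch, n_pixels, 2); z: (batch, z_dim)
        z = z.unsqueeze(1).expand(-1, coords.size(1), -1)
        return self.net(torch.cat([coords, z], dim=-1)).squeeze(-1)

def transform_coords(coords, theta, dt):
    """Rotate and translate the coordinate grid; differentiable in theta and dt."""
    cos, sin = torch.cos(theta), torch.sin(theta)
    rot = torch.stack([torch.stack([cos, -sin], dim=-1),
                       torch.stack([sin,  cos], dim=-1)], dim=-2)  # (batch, 2, 2)
    return coords @ rot.transpose(-1, -2) + dt.unsqueeze(1)

# Hypothetical usage: reconstruct a 28x28 image from z, theta, dt
# (in the full model these would be produced by an approximate-inference network).
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 28), torch.linspace(-1, 1, 28), indexing="ij")
grid = torch.stack([xs, ys], dim=-1).reshape(1, -1, 2)   # (1, 784, 2)
decoder = SpatialDecoder(z_dim=8)
z, theta, dt = torch.randn(1, 8), torch.zeros(1), torch.zeros(1, 2)
recon = decoder(transform_coords(grid, theta, dt), z)    # (1, 784) pixel values

Because the reconstruction is evaluated at transformed coordinates, gradients flow through theta and dt, which is what allows the network to infer rotation and translation while the remaining content is captured by z.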
