Poster

Statistical Regeneration Guarantees of the Wasserstein Autoencoder with Latent Space Consistency

Anish Chakrabarty · Swagatam Das

Keywords: [ Theory ] [ Generative Model ] [ Optimal Transport ] [ Representation Learning ]

Tue 7 Dec 8:30 a.m. PST — 10 a.m. PST
 
Spotlight presentation:

Abstract:

The introduction of Variational Autoencoders (VAE) marked a breakthrough in the history of representation learning. Besides earning several accolades of its own, the VAE has spawned a series of inventions in the form of its immediate successors. The Wasserstein Autoencoder (WAE), an heir to that lineage, inherits these strengths along with heightened generative promise, matching even generative adversarial networks (GANs). Recent years have witnessed a remarkable resurgence in statistical analyses of GANs. Comparable examinations of autoencoders, however, despite their diverse applicability and notable empirical performance, remain largely absent. To close this gap, we investigate the statistical properties of the WAE. First, using Vapnik–Chervonenkis (VC) theory, we provide statistical guarantees that the WAE achieves the target distribution in the latent space. The main result consequently ensures regeneration of the input distribution, harnessing the machinery of optimal transport of measures under the Wasserstein metric. This study, in turn, hints at the class of distributions the WAE can reconstruct after compression into a latent law.
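The WAE objective alluded to above pairs a reconstruction cost with a penalty that pushes the encoded (aggregated posterior) distribution toward the latent prior. The following is a minimal NumPy sketch of one common instantiation of that penalty, the MMD with an RBF kernel; the kernel choice, bandwidth `sigma`, and weight `lam` are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    # Pairwise RBF kernel matrix between the rows of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd_penalty(q_z, p_z, sigma=1.0):
    # Biased MMD^2 estimate between encoded samples q_z (aggregated
    # posterior) and samples p_z drawn from the latent prior.
    k_qq = rbf_kernel(q_z, q_z, sigma).mean()
    k_pp = rbf_kernel(p_z, p_z, sigma).mean()
    k_qp = rbf_kernel(q_z, p_z, sigma).mean()
    return k_qq + k_pp - 2 * k_qp

def wae_loss(x, x_recon, q_z, p_z, lam=10.0):
    # Reconstruction cost plus a lambda-weighted latent divergence penalty,
    # the general shape of the WAE objective.
    recon = ((x - x_recon) ** 2).mean()
    return recon + lam * mmd_penalty(q_z, p_z)
```

The penalty vanishes when the encoded samples coincide with the prior samples and grows as the two laws separate, which is exactly the latent-space matching whose statistical guarantees the paper studies.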
