Poster
Contrastively Disentangled Sequential Variational Autoencoder
Junwen Bai · Weiran Wang · Carla Gomes

Thu Dec 09 04:30 PM -- 06:00 PM (PST)

Self-supervised disentangled representation learning is a critical task in sequence modeling. The learnt representations contribute to better model interpretability and data generation, and improve sample efficiency for downstream tasks. We propose a novel sequence representation learning method, named Contrastively Disentangled Sequential Variational Autoencoder (C-DSVAE), to extract and separate the static (time-invariant) and dynamic (time-variant) factors in the latent space. Unlike previous sequential variational autoencoder methods, we use a novel evidence lower bound that maximizes the mutual information between the input and the latent factors while penalizing the mutual information between the static and dynamic factors. We leverage contrastive estimations of the mutual information terms in training, together with simple yet effective augmentation techniques, to introduce additional inductive biases. Our experiments show that C-DSVAE significantly outperforms previous state-of-the-art methods on multiple metrics.
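
As an illustrative sketch only (the exact formulation and notation are given in the paper), the objective described above can be read as a standard sequential ELBO augmented with mutual-information terms, where $s$ denotes the static factor, $z_{1:T}$ the dynamic factors, and the weighting coefficients $\alpha, \beta, \gamma$ are assumed here for illustration:

$\mathcal{L} \;=\; \mathrm{ELBO}(x) \;+\; \alpha\, I(x;\, s) \;+\; \beta\, I(x;\, z_{1:T}) \;-\; \gamma\, I(s;\, z_{1:T})$

In training, each mutual-information term would be approximated by a contrastive (InfoNCE-style) estimate computed from augmented views of the input, as the abstract indicates.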

Author Information

Junwen Bai (Cornell University)
Weiran Wang (Google)
Carla Gomes (Cornell University)
