Deep state-space models (DSSMs) enable temporal predictions by learning the underlying dynamics of observed sequence data. They are typically trained by maximising the evidence lower bound (ELBO). However, as we show, this does not ensure the model actually learns the underlying dynamics. We therefore propose a constrained optimisation framework as a general approach to training DSSMs. Building on this, we introduce the extended Kalman VAE (EKVAE), which combines amortised variational inference with classic Bayesian filtering/smoothing to model dynamics more accurately than RNN-based DSSMs. Our results show that the constrained optimisation framework significantly improves system identification and prediction accuracy when applied to established state-of-the-art DSSMs. The EKVAE outperforms previous models in prediction accuracy, achieves remarkable results in identifying dynamical systems, and can furthermore learn state-space representations in which static and dynamic features are disentangled.
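The constrained-optimisation idea can be made concrete with a small sketch. Below is a minimal, hypothetical PyTorch-style training step (not the authors' implementation): instead of maximising the ELBO directly, the KL term of the latent dynamics is minimised subject to a reconstruction-error constraint, with a Lagrange multiplier updated by dual ascent. The `model` interface, the tolerance `kappa`, and the learning rates are assumptions for illustration.

```python
import torch

def constrained_step(model, optimiser, x, lam, kappa, lam_lr=1e-2):
    """One primal-dual update on a DSSM ELBO treated as a constrained
    problem: minimise the KL of the latent dynamics subject to the
    reconstruction error staying below the tolerance kappa."""
    recon_err, kl = model(x)           # hypothetical model interface
    constraint = recon_err - kappa     # <= 0 once the constraint holds
    loss = kl + lam * constraint       # Lagrangian (primal objective)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    # Dual ascent: grow lambda while the constraint is violated,
    # shrink it (down to zero) once reconstructions are good enough.
    lam = max(0.0, lam + lam_lr * constraint.item())
    return loss.item(), lam
```

The Bayesian filtering component the EKVAE builds on can likewise be illustrated in isolation. The sketch below is one textbook extended Kalman filter step (predict, then update); the transition `f`, emission `h`, and their Jacobians `F`, `H` are placeholders, not the paper's learned networks.

```python
import numpy as np

def ekf_step(mu, P, y, f, F, h, H, Q, R):
    """Predict with the (nonlinear) transition f, then update with
    observation y under the (nonlinear) emission h."""
    # Predict: propagate the mean through f, the covariance through
    # the transition Jacobian F, and add process noise Q.
    mu_pred = f(mu)
    Fj = F(mu)
    P_pred = Fj @ P @ Fj.T + Q
    # Update: linearise the emission at the predicted mean.
    Hj = H(mu_pred)
    S = Hj @ P_pred @ Hj.T + R            # innovation covariance
    K = P_pred @ Hj.T @ np.linalg.inv(S)  # Kalman gain
    mu_new = mu_pred + K @ (y - h(mu_pred))
    P_new = (np.eye(len(mu)) - K @ Hj) @ P_pred
    return mu_new, P_new
```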
Author Information
Alexej Klushyn (Technical University of Munich)
Richard Kurle (AWS AI Labs)
Maximilian Soelch (Technical University of Munich)
Botond Cseke (Microsoft Research Cambridge)
Patrick van der Smagt (Volkswagen Group)
More from the Same Authors
- 2021: On Symmetries in Variational Bayesian Neural Nets
  Richard Kurle · Tim Januschowski · Jan Gasthaus · Bernie Wang
- 2021 Poster: Deep Explicit Duration Switching Models for Time Series
  Abdul Fatir Ansari · Konstantinos Benidis · Richard Kurle · Ali Caner Turkmen · Harold Soh · Alexander Smola · Bernie Wang · Tim Januschowski
- 2020 Poster: Deep Rao-Blackwellised Particle Filters for Time Series Forecasting
  Richard Kurle · Syama Sundar Rangapuram · Emmanuel de Bézenac · Stephan Günnemann · Jan Gasthaus
- 2020 Poster: Normalizing Kalman Filters for Multivariate Time Series Analysis
  Emmanuel de Bézenac · Syama Sundar Rangapuram · Konstantinos Benidis · Michael Bohlke-Schneider · Richard Kurle · Lorenzo Stella · Hilaf Hasson · Patrick Gallinari · Tim Januschowski
- 2019 Poster: Learning Hierarchical Priors in VAEs
  Alexej Klushyn · Nutan Chen · Richard Kurle · Botond Cseke · Patrick van der Smagt
- 2019 Spotlight: Learning Hierarchical Priors in VAEs
  Alexej Klushyn · Nutan Chen · Richard Kurle · Botond Cseke · Patrick van der Smagt
- 2016 Poster: f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization
  Sebastian Nowozin · Botond Cseke · Ryota Tomioka