

Oral in Workshop: Deep Generative Models and Downstream Applications

Deep Generative model with Hierarchical Latent Factors for Time Series Anomaly Detection

Cristian Challu · Peihong Jiang · Ying Nian Wu · Laurent Callot


Abstract:

Multivariate time-series anomaly detection has become an active area of research in recent years, with Deep Learning models outperforming previous approaches on benchmark datasets. Among reconstruction-based models, almost all previous work has focused on Variational Autoencoders and Generative Adversarial Networks. This work presents DGHL, a new family of generative models for time-series anomaly detection, trained by maximizing the observed likelihood directly through posterior sampling and alternating gradient descent. A top-down Convolutional Network maps time-series windows to a novel hierarchical latent space, exploiting temporal dynamics to encode information efficiently. Despite relying on posterior sampling, DGHL is computationally more efficient than current approaches, with up to 10x shorter training times than RNN-based models. Our method outperformed other state-of-the-art models on four popular benchmark datasets. Finally, DGHL is robust to variable features between entities and remains accurate even with large proportions of missing values, settings of increasing relevance in IoT applications. We demonstrate the superior robustness of DGHL with occlusion experiments that are novel to this literature.
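The training scheme described above, maximizing the observed likelihood via posterior sampling and alternating gradient descent, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it assumes a toy fully connected generator (`ToyGenerator`), Langevin dynamics for posterior sampling, and a Gaussian observation model, whereas DGHL uses a top-down Convolutional Network with hierarchical latent factors. It is meant only to show the alternation between inferring latent codes for each time-series window and updating the generator parameters.

```python
import torch
import torch.nn as nn

# Hypothetical toy generator: maps a latent vector to a time-series window.
# DGHL itself uses a top-down ConvNet with a hierarchical latent space; this
# stand-in only illustrates the alternating training loop from the abstract.
class ToyGenerator(nn.Module):
    def __init__(self, latent_dim=16, window_size=64, n_features=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128),
            nn.ReLU(),
            nn.Linear(128, window_size * n_features),
        )
        self.window_size = window_size
        self.n_features = n_features

    def forward(self, z):
        out = self.net(z)
        return out.view(-1, self.window_size, self.n_features)


def langevin_posterior_sample(generator, x, z, n_steps=25, step_size=0.1, sigma=1.0):
    """Refine latent codes z toward the posterior p(z | x) with Langevin dynamics."""
    for _ in range(n_steps):
        z = z.detach().requires_grad_(True)
        recon = generator(z)
        # log p(x, z) up to a constant: Gaussian likelihood + standard normal prior.
        log_joint = -0.5 * ((x - recon) ** 2).sum() / sigma**2 - 0.5 * (z ** 2).sum()
        grad = torch.autograd.grad(log_joint, z)[0]
        z = z + 0.5 * step_size**2 * grad + step_size * torch.randn_like(z)
    return z.detach()


def train_step(generator, optimizer, x, z, sigma=1.0):
    """One alternating round: sample latents, then update generator parameters."""
    z = langevin_posterior_sample(generator, x, z)
    optimizer.zero_grad()
    recon = generator(z)
    loss = 0.5 * ((x - recon) ** 2).sum() / sigma**2
    loss.backward()
    optimizer.step()
    return z, loss.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    gen = ToyGenerator()
    opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    x = torch.randn(32, 64, 8)   # a batch of multivariate time-series windows
    z = torch.randn(32, 16)      # persistent latent codes for the batch
    for epoch in range(5):
        z, loss = train_step(gen, opt, x, z)
        print(f"epoch {epoch}: reconstruction loss {loss:.2f}")
```

In a reconstruction-based detector of this kind, the anomaly score for a new window would typically be its reconstruction error after inferring latents by the same posterior sampling procedure; windows the generator cannot reconstruct well are flagged as anomalous.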
