Deep Variational Semi-Supervised Novelty Detection
Tal Daniel · Thanard Kurutach · Aviv Tamar
Event URL: https://openreview.net/forum?id=pyEsZ1jQXw

In anomaly detection (AD), one seeks to identify whether a test sample is abnormal, given a data set of normal samples. A recent and promising approach to AD relies on deep generative models, such as variational autoencoders (VAEs), for unsupervised learning of the normal data distribution. In semi-supervised AD (SSAD), the data also includes a small sample of labeled anomalies. In this work, we propose two variational methods for training VAEs for SSAD. The intuitive idea in both methods is to train the encoder to 'separate' latent vectors of normal data from those of outlier data. We show that this idea can be derived from principled probabilistic formulations of the problem, and propose simple and effective algorithms. Our methods can be applied to various data types, as we demonstrate on SSAD datasets ranging from natural images to astronomy and medicine, can be combined with any VAE model architecture, and are naturally compatible with ensembling. Compared with state-of-the-art SSAD methods that are not specific to particular data types, we obtain marked improvement in outlier detection.
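To make the 'separation' idea concrete, here is a minimal, hypothetical sketch of a latent-space objective in this spirit: the standard VAE KL term pulls latent codes of normal samples toward the prior, while a hinge term pushes latent codes of labeled anomalies away from it. This is an illustration only, not the authors' actual objective; the function names (`gaussian_kl`, `ssad_latent_loss`) and the margin formulation are assumptions made for this sketch.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), computed per sample
    # (last axis is the latent dimension).
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def ssad_latent_loss(mu_n, logvar_n, mu_a, logvar_a, margin=5.0):
    # Normal samples: standard VAE KL term, pulling codes toward the prior.
    loss_normal = gaussian_kl(mu_n, logvar_n).mean()
    # Labeled anomalies: hinge term, pushing codes away from the prior
    # until their KL exceeds the margin (an assumed design choice here).
    loss_anom = np.maximum(0.0, margin - gaussian_kl(mu_a, logvar_a)).mean()
    return loss_normal + loss_anom

# Toy usage: anomaly codes far from the prior incur no hinge penalty.
mu_n = np.zeros((2, 4)); lv = np.zeros((2, 4))
mu_far = np.full((2, 4), 3.0)
print(ssad_latent_loss(mu_n, lv, mu_far, lv))   # only the normal-KL term remains
```

In a full model this latent loss would be added to the usual reconstruction term; the abstract's actual methods are derived from probabilistic formulations rather than an ad hoc margin.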

Author Information

Tal Daniel (Technion)
Thanard Kurutach (University of California Berkeley)
Aviv Tamar (Technion)
