

Poster

Transfer Anomaly Detection by Inferring Latent Domain Representations

Atsutoshi Kumagai · Tomoharu Iwata · Yasuhiro Fujiwara

East Exhibition Hall B, C #45

Keywords: [ Algorithms ] [ Multitask and Transfer Learning ] [ Classification ]


Abstract:

We propose a method to improve anomaly detection performance on target domains by transferring knowledge from related domains. Although anomaly labels are valuable for learning anomaly detectors, they are difficult to obtain due to their rarity. To alleviate this problem, existing methods use anomalous and normal instances from the related domains as well as target normal instances. However, these methods require training on each target domain, which can be problematic in some situations due to the high computational cost of training. The proposed method can infer the anomaly detectors for target domains without re-training by introducing latent domain vectors, which are latent representations of the domains and are used to infer the anomaly detectors. The latent domain vector for each domain is inferred from the set of normal instances in that domain. The anomaly score function for each domain is modeled on the basis of autoencoders, and its domain-specific property is controlled by the latent domain vector. The anomaly score function for each domain is trained so that the scores of normal instances become low and the scores of anomalies become higher than those of the normal instances, while taking into account the uncertainty of the latent domain vectors. When target normal instances are available during training, the proposed method can also use them within the same unified framework. The effectiveness of the proposed method is demonstrated through experiments on one synthetic and four real-world datasets. In particular, the proposed method without re-training outperforms existing methods with target-specific training.
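The following is a minimal sketch (not the authors' implementation) of the idea described in the abstract, under these assumptions: a permutation-invariant set encoder infers a Gaussian latent domain vector from a domain's normal instances; an autoencoder conditioned on that vector produces the anomaly score as a reconstruction error; and training keeps normal scores low while pushing anomaly scores above them by a margin, sampling the latent domain vector to reflect its uncertainty. All module and function names (`SetEncoder`, `ConditionalAE`, `domain_loss`) are illustrative, not from the paper.

```python
# Hypothetical sketch of a domain-conditioned autoencoder anomaly detector.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SetEncoder(nn.Module):
    """Infer mean and log-variance of a latent domain vector from a set of normal instances."""
    def __init__(self, x_dim, z_dim, h_dim=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)

    def forward(self, x_set):               # x_set: (n_normal, x_dim)
        h = self.phi(x_set).mean(dim=0)     # mean pooling -> permutation invariant
        return self.mu(h), self.logvar(h)

class ConditionalAE(nn.Module):
    """Autoencoder whose domain-specific behavior is controlled by the latent domain vector."""
    def __init__(self, x_dim, z_dim, h_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, h_dim))
        self.dec = nn.Sequential(nn.Linear(h_dim + z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def score(self, x, z_d):                 # anomaly score = reconstruction error
        z_rep = z_d.expand(x.size(0), -1)
        h = self.enc(torch.cat([x, z_rep], dim=1))
        x_hat = self.dec(torch.cat([h, z_rep], dim=1))
        return ((x - x_hat) ** 2).mean(dim=1)

def domain_loss(set_enc, cae, x_normal, x_anomaly, margin=1.0):
    """Training loss for one related domain with labeled normal and anomalous instances."""
    mu, logvar = set_enc(x_normal)
    # Sample z_d with the reparameterization trick to account for its uncertainty.
    z_d = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    s_normal = cae.score(x_normal, z_d)
    s_anomaly = cae.score(x_anomaly, z_d)
    # Keep normal scores low and push anomaly scores above them by a margin.
    ranking = F.relu(margin + s_normal.mean() - s_anomaly.mean())
    return s_normal.mean() + ranking
```

At test time, a target domain would be handled without re-training: pass its normal instances through `SetEncoder` to obtain the latent domain vector (e.g., its mean), then call `cae.score` on target instances with that vector as the anomaly score function for the domain.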
