

Poster

HAWK: Learning to Understand Open-World Video Anomalies

Jiaqi Tang · Hao Lu · Ruizheng Wu · Xiaogang Xu · Ke Ma · Cheng Fang · Bin Guo · Jiangbo Lu · Qifeng Chen · Yingcong Chen

East Exhibit Hall A-C #3801
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Video Anomaly Detection (VAD) systems can autonomously monitor and identify disturbances, reducing the need for manual labor and its associated costs. However, current VAD systems are often limited by their superficial semantic understanding of scenes and minimal user interaction. Additionally, the prevalent data scarcity in existing datasets restricts their applicability in open-world scenarios. In this paper, we introduce HAWK, a novel framework that leverages interactive large visual language models (VLMs) to interpret video anomalies precisely. Recognizing the difference in motion information between abnormal and normal videos, HAWK explicitly integrates the motion modality to enhance anomaly identification. To reinforce motion attention, we construct an auxiliary consistency loss between the motion and video spaces, guiding the video branch to focus on the motion modality. Moreover, to improve the interpretation of motion-to-language, we establish a clear supervisory relationship between motion and its linguistic representation. Furthermore, we have annotated over 8,000 anomaly videos with language descriptions, enabling effective training across diverse open-world scenarios, and have also created 8,000 question-answering pairs for users' open-world questions. The final results demonstrate that HAWK achieves SOTA performance, surpassing existing baselines in both video description generation and question-answering. Our code, dataset, and demo will be released at https://github.com/jqtangust/hawk.
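To make the auxiliary consistency idea concrete, below is a minimal sketch (not the authors' released code) of a motion-video consistency loss: video-branch features are encouraged to align with features derived from the motion modality (e.g., optical-flow tokens). All module names, tensor shapes, and the cosine-similarity form of the loss are assumptions for illustration only.

```python
# Hypothetical sketch of a motion-video consistency loss; shapes and
# pooling strategy are assumptions, not the paper's exact formulation.
import torch
import torch.nn.functional as F


def consistency_loss(video_feats: torch.Tensor,
                     motion_feats: torch.Tensor) -> torch.Tensor:
    """Cosine-similarity consistency between video and motion embeddings.

    video_feats, motion_feats: (batch, tokens, dim), pooled to (batch, dim).
    """
    v = F.normalize(video_feats.mean(dim=1), dim=-1)   # pool over tokens
    m = F.normalize(motion_feats.mean(dim=1), dim=-1)
    # 1 - cosine similarity, averaged over the batch
    return (1.0 - (v * m).sum(dim=-1)).mean()


if __name__ == "__main__":
    video = torch.randn(2, 16, 768)    # dummy video-branch features
    motion = torch.randn(2, 16, 768)   # dummy motion-branch features
    print(consistency_loss(video, motion).item())
```

In practice, such a term would be added to the main training objective with a weighting coefficient so that the video branch attends to motion cues without overriding the language supervision.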
