

Poster

Does Video-Text Pretraining Help Open-Vocabulary Online Action Detection?

Qingsong Zhao · Yi Wang · Jilan Xu · Yinan He · Zifan Song · Limin Wang · Yu Qiao · Cairong Zhao

East Exhibit Hall A-C #1808
[ Project Page ]
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Video understanding relies on accurate action detection for temporal analysis. However, existing mainstream methods are limited in real-world applications by their offline, closed-set evaluation protocols and their dependence on manual annotations. To address these challenges and enable real-time action understanding in open-world scenarios, we propose OV-OAD, a zero-shot online action detector that leverages vision-language models and learns solely from text supervision. By introducing an object-centered decoder unit into a Transformer-based model, we aggregate frames with similar semantics using video-text correspondence. Extensive experiments on two action detection benchmarks demonstrate that OV-OAD outperforms other advanced zero-shot methods, achieving 37.5% mean average precision on THUMOS'14 and 73.8% calibrated average precision on TVSeries. Our work establishes a robust baseline for zero-shot transfer in online action detection, enabling scalable solutions for open-world temporal understanding.
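The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of the general idea it describes: a set of learnable queries cross-attends over a causal window of frame features (a query-based "decoder unit"), and the pooled result is aligned with a paired caption embedding so that training needs text supervision only. The class and function names (ObjectCentricDecoderUnit, text_supervision_loss), the feature dimension, and the contrastive objective are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ObjectCentricDecoderUnit(nn.Module):
    """Hypothetical decoder unit: learnable queries cross-attend over a
    window of frame features, grouping frames with similar semantics."""
    def __init__(self, dim=512, num_queries=8, num_heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, frame_feats):
        # frame_feats: (B, T, D) -- causal window of per-frame embeddings
        B = frame_feats.size(0)
        q = self.queries.unsqueeze(0).expand(B, -1, -1)            # (B, Q, D)
        attended, _ = self.cross_attn(q, frame_feats, frame_feats)  # queries attend to frames
        q = self.norm1(q + attended)
        q = self.norm2(q + self.ffn(q))
        return q                                                    # (B, Q, D)

def text_supervision_loss(query_feats, text_feats, temperature=0.07):
    """Assumed text-only objective: symmetric contrastive alignment between
    pooled query features and one caption embedding per clip."""
    v = F.normalize(query_feats.mean(dim=1), dim=-1)   # pool queries -> (B, D)
    t = F.normalize(text_feats, dim=-1)                # caption embeddings (B, D)
    logits = v @ t.t() / temperature
    labels = torch.arange(v.size(0), device=v.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

# Toy usage: 2 clips, 16-frame windows, 512-d CLIP-like frame/caption features.
frames = torch.randn(2, 16, 512)
captions = torch.randn(2, 512)
unit = ObjectCentricDecoderUnit()
loss = text_supervision_loss(unit(frames), captions)
```

Because the supervision here comes only from video-caption pairs, no frame-level action labels are required; at inference, class names can be embedded as text and matched against the pooled query features for zero-shot prediction.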
