

Poster

CountGD: Multi-Modal Open-World Counting

Niki Amini-Naieni · Tengda Han · Andrew Zisserman

East Exhibit Hall A-C #3605
[ Project Page ]
Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

The goal of this paper is to improve the generality and accuracy of open-vocabulary object counting in images. To improve generality, we repurpose an open-vocabulary detection foundation model (GroundingDINO) for the counting task, and extend its capabilities by introducing modules that enable specifying the target object to count with visual exemplars. In turn, these new capabilities -- being able to specify the target object by multiple modalities (text and exemplars) -- lead to an improvement in counting accuracy. We make three contributions: first, we introduce the first open-world counting model, CountGD, where the prompt can be specified by a text description, visual exemplars, or both; second, we show that the model significantly improves the state of the art on multiple counting benchmarks -- when using text only, CountGD outperforms all previous text-only works, and when using both text and visual exemplars, it outperforms all previous models; third, we carry out a preliminary study of the interactions between the text and visual exemplar prompts, including cases where they reinforce each other and where one restricts the other.
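To give a rough intuition for multi-modal prompting, the sketch below scores candidate regions against a text embedding, a set of visual-exemplar embeddings, or both, and counts the regions above a confidence threshold. All names (`count_objects`, the averaging fusion, the threshold) are illustrative assumptions for this page; the actual CountGD model fuses the prompts inside a GroundingDINO-style transformer rather than by averaging similarity scores.

```python
import numpy as np

def count_objects(patch_feats, text_feat=None, exemplar_feats=None, threshold=0.5):
    """Hypothetical sketch: count regions matching a multi-modal prompt.

    patch_feats: (N, D) candidate-region features (assumed L2-normalized)
    text_feat: (D,) text-prompt embedding, or None
    exemplar_feats: (K, D) visual-exemplar embeddings, or None
    """
    scores = []
    if text_feat is not None:
        # cosine similarity of each region to the text prompt
        scores.append(patch_feats @ text_feat)
    if exemplar_feats is not None:
        # best similarity over the provided visual exemplars
        scores.append((patch_feats @ exemplar_feats.T).max(axis=1))
    if not scores:
        raise ValueError("provide a text prompt, visual exemplars, or both")
    # naive fusion: average the per-modality scores
    fused = np.mean(scores, axis=0)
    return int((fused > threshold).sum())
```

Passing both a text embedding and exemplar embeddings makes the two prompts reinforce each other under this fusion rule (a region must match the average of both), loosely mirroring the prompt-interaction study mentioned in the abstract.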
