

Poster

Web-Scale Visual Entity Recognition: An LLM-Driven Data Approach

Mathilde Caron · Alireza Fathi · Cordelia Schmid · Ahmet Iscen

East Exhibit Hall A-C #1310
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Web-scale visual entity recognition, the task of associating images with their corresponding entities in vast knowledge bases such as Wikipedia, presents significant challenges due to the lack of clean, large-scale training data. In this paper, we propose a novel methodology to curate such a dataset, leveraging a multimodal large language model (LLM) for label verification, metadata generation, and rationale explanation. Instead of relying on the multimodal LLM to directly annotate data, which we found to be suboptimal, we prompt it to reason about potential candidate entity labels by accessing additional contextually relevant information (such as Wikipedia), resulting in more accurate annotations. We further use the multimodal LLM to enrich the dataset by generating question-answer pairs and a grounded, fine-grained textual description (referred to as a "rationale") that explains the connection between images and their assigned entities. Experiments demonstrate that models trained on this automatically curated data achieve state-of-the-art performance on web-scale visual entity recognition tasks (e.g., a +6.9% improvement on the OVEN entity task), underscoring the importance of high-quality training data in this domain.
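To make the verification step concrete, below is a minimal Python sketch of how one might prompt a multimodal LLM to validate a candidate entity label given retrieved Wikipedia context, and to emit a rationale and question-answer pairs for accepted examples. The paper does not publish its prompts or model interface, so the `llm` callable, the prompt wording, and the reply-parsing conventions here are all illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class CuratedExample:
    image_path: str
    entity: str                      # verified entity label (e.g., a Wikipedia title)
    rationale: str                   # grounded description tying image to entity
    qa_pairs: List[Tuple[str, str]]  # generated question-answer pairs

# Prompt asks the model to *verify and explain* a candidate label, given
# retrieved context, rather than to annotate the image from scratch.
# Wording is a placeholder, not the paper's actual prompt.
VERIFY_PROMPT = """Candidate entity: {entity}
Wikipedia context: {wiki_summary}

1. Answer VALID if the image depicts this entity, otherwise INVALID.
2. If VALID, give a short rationale grounded in visual evidence.
3. If VALID, give one question-answer pair as "Q: ... A: ..." on one line."""

def verify_candidate(
    image_path: str,
    entity: str,
    wiki_summary: str,
    llm: Callable[[str, str], str],  # hypothetical: (image_path, prompt) -> reply text
) -> Optional[CuratedExample]:
    """Keep an (image, entity) pair only if the multimodal LLM validates it."""
    reply = llm(image_path, VERIFY_PROMPT.format(entity=entity,
                                                 wiki_summary=wiki_summary))
    lines = [ln.strip() for ln in reply.splitlines() if ln.strip()]
    if not lines or not lines[0].upper().startswith("VALID"):
        return None  # candidate label rejected; drop it from the training set
    rationale = lines[1] if len(lines) > 1 else ""
    qa: List[Tuple[str, str]] = []
    for ln in lines[2:]:
        # Expect "Q: ... A: ..." on one line; skip anything else.
        if ln.startswith("Q:") and "A:" in ln:
            q, a = ln.split("A:", 1)
            qa.append((q[2:].strip(), a.strip()))
    return CuratedExample(image_path, entity, rationale, qa)
```

The design point this sketch illustrates is the one the abstract makes: checking a retrieved candidate label against contextual evidence is an easier, more reliable task for the LLM than open-ended annotation, and rejected candidates are simply filtered out of the curated dataset.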
