

Poster

No Filter: Cultural and Socioeconomic Diversity in Contrastive Vision-Language Models

Angéline Pouget · Lucas Beyer · Emanuele Bugliarello · Xiao Wang · Andreas Steiner · Xiaohua Zhai · Ibrahim Alabdulmohsin

East Exhibit Hall A-C #3810
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

We study cultural and socioeconomic diversity in contrastive vision-language models (VLMs). Using a broad range of benchmark datasets and evaluation metrics, we bring to attention several important findings. First, the common filtering of training data to English image-text pairs disadvantages communities of lower socioeconomic status and negatively impacts cultural understanding. Notably, this performance gap is not captured by, and is even at odds with, the currently popular evaluation metrics derived from the Western-centric ImageNet and COCO datasets. Second, pretraining with global, unfiltered data before fine-tuning on English content can improve cultural understanding without sacrificing performance on these popular benchmarks. Third, we introduce the task of geo-localization as a novel evaluation metric to assess cultural diversity in VLMs. Our work underscores the value of using diverse data to create more inclusive multimodal systems and lays the groundwork for developing VLMs that better represent global perspectives.
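To make the geo-localization evaluation concrete, here is a minimal sketch of how zero-shot geo-localization is typically scored with a contrastive VLM: images and country-name prompts are embedded in a shared space, and each image is assigned the country whose prompt embedding is most cosine-similar. The function name, the prompt scheme ("a photo taken in {country}"), and the use of precomputed embeddings are illustrative assumptions, not the paper's exact protocol.

```python
import math

def zero_shot_geolocalization_accuracy(image_embs, text_embs, labels):
    """Top-1 accuracy for zero-shot geo-localization (illustrative sketch).

    image_embs: list of image embedding vectors from the vision tower.
    text_embs:  one embedding per candidate country, e.g. obtained by
                encoding prompts like "a photo taken in {country}".
    labels:     ground-truth country index for each image.
    """
    def normalize(v):
        # L2-normalize so dot products are cosine similarities,
        # as in standard CLIP-style contrastive models.
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]

    texts = [normalize(t) for t in text_embs]
    correct = 0
    for emb, label in zip(image_embs, labels):
        img = normalize(emb)
        sims = [sum(a * b for a, b in zip(img, t)) for t in texts]
        pred = max(range(len(sims)), key=sims.__getitem__)  # closest country prompt
        correct += (pred == label)
    return correct / len(labels)
```

In practice the embeddings would come from the model's image and text encoders; accuracy can then be reported per region to surface the cultural and socioeconomic gaps the abstract describes.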
