

Oral in Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023 (FL@FM-NeurIPS'23)

Text-driven Prompt Generation for Vision-Language Models in Federated Learning

Chen Qiu · Xingyu Li · Chaithanya Kumar Mummadi · Madan Ganesh · Zhenzhen Li · Lu Peng · Wan-Yi Lin

Keywords: [ Federated Learning; Prompt Learning; Vision-Language Models ]

Sat 16 Dec 6:30 a.m. PST — 6:40 a.m. PST

Abstract:

Prompt learning for vision-language models, e.g., CoOp, has shown great success in adapting CLIP to different downstream tasks, and its low computational cost makes it a promising solution for federated learning. Existing prompt learning techniques replace hand-crafted text prompts with learned vectors that offer improvements on seen classes, but struggle to generalize to unseen classes. Our work addresses this challenge by proposing Federated Text-driven Prompt Generation (FedTPG), which learns a unified prompt generation network across multiple remote clients in a scalable manner. The prompt generation network is conditioned on task-related text input and is thus context-aware, making it suitable for generalizing to both seen and unseen classes. Our comprehensive empirical evaluations on nine diverse image classification datasets show that our method outperforms existing federated prompt learning methods, achieving better overall generalization on both seen and unseen classes, as well as on unseen datasets.
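The core idea described above, a shared prompt generation network conditioned on text and trained federatedly, can be sketched in miniature. This is only an illustrative toy, not the authors' implementation: all names, dimensions, and the squared-error objective (standing in for the actual CLIP-based loss) are assumptions. A linear map plays the role of the prompt generation network, each client updates a local copy on its own class-name text embeddings, and the server averages the resulting weights (FedAvg-style) into the unified global network.

```python
import numpy as np

# Hypothetical toy sketch of FedTPG-style training (names and loss are assumptions).
rng = np.random.default_rng(0)
EMBED_DIM, PROMPT_DIM = 8, 4  # text-embedding and prompt-vector sizes (made up)

def init_generator():
    # Weight matrix standing in for the prompt generation network.
    return rng.normal(scale=0.1, size=(EMBED_DIM, PROMPT_DIM))

def generate_prompts(W, text_embeddings):
    # Context-aware prompts conditioned on task-related text input.
    return text_embeddings @ W

def local_update(W, text_embeddings, targets, lr=0.1, steps=5):
    # Toy gradient steps on a squared error, standing in for the CLIP loss.
    W = W.copy()
    for _ in range(steps):
        pred = generate_prompts(W, text_embeddings)
        grad = text_embeddings.T @ (pred - targets) / len(text_embeddings)
        W -= lr * grad
    return W

# Three simulated remote clients, each with its own class-name embeddings.
clients = [(rng.normal(size=(5, EMBED_DIM)), rng.normal(size=(5, PROMPT_DIM)))
           for _ in range(3)]

W_global = init_generator()
for _ in range(10):
    # Each client adapts the shared generator locally ...
    local_weights = [local_update(W_global, x, y) for x, y in clients]
    # ... and the server aggregates by simple averaging (FedAvg-style).
    W_global = np.mean(local_weights, axis=0)

# The unified generator produces one prompt vector per class-name embedding.
prompts = generate_prompts(W_global, clients[0][0])
print(prompts.shape)
```

Because the generator is conditioned on text embeddings rather than storing per-class learned vectors, the same trained weights can be applied to embeddings of class names never seen during training, which is the mechanism the abstract credits for generalization to unseen classes.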
