

Poster in the Workshop on Robustness of Zero/Few-Shot Learning in Foundation Models (R0-FoMo)

How Do Large Multimodal Models Really Fare in Classical Vision Few-Shot Challenges? A Deep Dive

Qing Guo · Prashan Wanigasekara · Jian Zheng · Jacob Fang · Xinwei Deng · Chenyang Tao


Abstract:

Recent advances in multimodal foundation models have demonstrated impressive in-context learning capabilities across diverse vision-language tasks. However, the existing literature has focused mainly on few-shot learning tasks that mirror their NLP counterparts. It remains unclear whether these foundation models can also address classical vision challenges such as few-shot classification, which in some settings (e.g., 5-way 5-shot) necessitates sophisticated reasoning over several dozen images -- a challenging task for learning systems. In this work, we take a deep dive into the potential and limitations of existing multimodal models on this problem. Our investigation reveals that, with careful calibration, these models can outperform dedicated visual models in complex narratable scenes, yet they can falter on more abstract visual inputs. Moreover, we investigate curriculum learning and show how it can mitigate the performance gap by smoothly bridging verbal and nonverbal reasoning in vision-language tasks.
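For concreteness, the following is a minimal sketch (not the authors' code) of why the 5-way 5-shot setting is demanding for in-context learners: a single episode places 25 labeled support images plus a query into one interleaved prompt. The dataset layout (a list of (image_path, label) pairs) and the message format are assumptions for illustration only.

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=5, n_query=1, seed=0):
    """Sample an N-way K-shot episode from `dataset`, a list of
    (image_path, class_label) pairs (hypothetical layout).
    Returns support and query sets of (image_path, label) pairs."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for image_path, label in dataset:
        by_class[label].append(image_path)

    classes = rng.sample(sorted(by_class), n_way)
    support, query = [], []
    for label in classes:
        picks = rng.sample(by_class[label], k_shot + n_query)
        support += [(p, label) for p in picks[:k_shot]]
        query += [(p, label) for p in picks[k_shot:]]
    rng.shuffle(support)  # interleave classes rather than grouping them
    return support, query

def build_prompt(support, query_image):
    """Flatten the episode into an interleaved image/text sequence,
    the rough shape of prompt a multimodal in-context learner consumes.
    For 5-way 5-shot this already spans 25 support images plus the query."""
    parts = []
    for image_path, label in support:
        parts += [{"type": "image", "path": image_path},
                  {"type": "text", "text": f"Label: {label}"}]
    parts += [{"type": "image", "path": query_image},
              {"type": "text", "text": "Label:"}]
    return parts
```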
