Poster in Workshop: Instruction Tuning and Instruction Following

CIEM: Contrastive Instruction Evaluation Method for Better Instruction Tuning

Hongyu Hu · Jiyuan Zhang · Minyi Zhao · Zhenbang Sun

Keywords: [ Instruction Tuning ] [ Evaluation ] [ Vision Language Model ]


Abstract: Research on Large Vision-Language Models (LVLMs) has been significantly advanced by the success of Large Language Models (LLMs). Nevertheless, these Vision-Language Models (VLMs) suffer from hallucination: owing to an insufficient understanding of the vision and language modalities, VLMs may generate incorrect perception information in downstream applications, for example, captioning a non-existent entity. To address the hallucination phenomenon, on the one hand, we introduce a $\textbf{C}$ontrastive $\textbf{I}$nstruction $\textbf{E}$valuation $\textbf{M}$ethod (CIEM), an automatic pipeline that leverages an annotated image-text dataset together with an LLM to generate factual/contrastive question-answer pairs for evaluating the hallucination of VLMs. On the other hand, based on CIEM, we further propose a new instruction-tuning method called CIT (the abbreviation of $\textbf{C}$ontrastive $\textbf{I}$nstruction $\textbf{T}$uning) to alleviate the hallucination of VLMs by automatically producing high-quality factual/contrastive question-answer pairs and corresponding justifications for model tuning. Through extensive experiments on CIEM and CIT, we pinpoint the hallucination issues commonly present in existing VLMs, the inability of current instruction-tuning datasets to handle the hallucination phenomenon, and the superiority of CIT-tuned VLMs on both CIEM and public datasets. Please contact the authors for the code and generated dataset.
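
To make the described pipeline concrete, below is a minimal sketch of how factual/contrastive question-answer pairs might be generated from caption annotations and how a hallucination score could be computed from a VLM's answers. The prompt wording, the generic `llm` callable, and the `hallucination_rate` metric are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of a CIEM-style contrastive QA generation and scoring step (assumed
# pipeline, not the authors' code). Given a ground-truth caption, an LLM is
# prompted to emit one factual yes/no question (answer "yes") about an object
# present in the caption and one contrastive question (answer "no") about a
# plausible object that is absent.
from typing import Callable, Dict, List

PROMPT_TEMPLATE = (
    "Caption: {caption}\n"
    "Write one yes/no question about an object that IS in the caption, "
    "and one yes/no question about a plausible object that is NOT in the caption.\n"
    "Format:\nFACTUAL: <question>\nCONTRASTIVE: <question>"
)

def generate_qa_pairs(caption: str, llm: Callable[[str], str]) -> List[Dict[str, str]]:
    """Turn one annotated caption into a factual and a contrastive QA pair."""
    reply = llm(PROMPT_TEMPLATE.format(caption=caption))
    pairs = []
    for line in reply.splitlines():
        if line.startswith("FACTUAL:"):
            pairs.append({"question": line.split(":", 1)[1].strip(), "answer": "yes"})
        elif line.startswith("CONTRASTIVE:"):
            pairs.append({"question": line.split(":", 1)[1].strip(), "answer": "no"})
    return pairs

def hallucination_rate(predictions: List[str], answers: List[str]) -> float:
    """Fraction of contrastive ('no') questions the VLM wrongly answers 'yes'."""
    wrong = sum(1 for p, a in zip(predictions, answers)
                if a == "no" and p.strip().lower().startswith("yes"))
    total = sum(1 for a in answers if a == "no")
    return wrong / total if total else 0.0
```

In this reading, the annotated captions supply the ground truth about which objects exist, the LLM only rewrites that ground truth into question form, and hallucination is measured by how often the evaluated VLM affirms objects that the annotation says are absent.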
