

Poster

Self-Calibrated Tuning of Vision-Language Models for Out-of-Distribution Detection

Geng Yu · Jianing Zhu · Jiangchao Yao · Bo Han

East Exhibit Hall A-C #4611
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Out-of-distribution (OOD) detection is crucial for deploying reliable machine learning models in open-world applications. Recent advances in CLIP-based OOD detection have shown promising results by regularizing prompt tuning with OOD features extracted from ID data. However, the irrelevant context mined from ID data can be spurious due to inaccurate foreground-background decomposition, which limits OOD detection performance. In this work, we propose a novel framework, Self-Calibrated Tuning (SCT), to mitigate this problem and enable effective OOD detection with only the given few-shot ID data. Specifically, SCT introduces modulating factors on each of the two components of the original learning objective. During training, it adaptively directs the optimization process between the two tasks according to the prediction uncertainty of each sample, thereby calibrating the influence of the OOD regularization; the framework is compatible with many prompt-tuning-based OOD detection methods. Extensive experiments and analyses have been conducted to characterize and demonstrate the effectiveness of the proposed SCT. The code is publicly available at: https://github.com/tmlr-group/SCT.
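The abstract describes reweighting the two parts of the learning objective (ID classification and OOD regularization) with uncertainty-driven modulating factors. The following is a minimal NumPy sketch of that idea only; the specific factor forms (a focal-style weight on the ID loss and a mean-confidence weight on the regularizer) and the uniform-prediction OOD regularizer are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def sct_loss_sketch(logits_id, labels_id, logits_ood, gamma=1.0):
    """Illustrative self-calibrated objective: combine an ID
    classification loss and an OOD regularization term, each scaled
    by a modulating factor derived from prediction confidence.
    The factor forms here are assumed for illustration."""
    # Softmax probabilities for the ID batch.
    probs = np.exp(logits_id - logits_id.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    # Confidence on the true class per sample.
    conf = probs[np.arange(len(labels_id)), labels_id]

    # Per-sample ID cross-entropy.
    ce = -np.log(conf + 1e-12)

    # Assumed OOD regularizer: push surrogate-OOD predictions toward
    # the uniform distribution (KL to uniform, a common choice in
    # prompt-tuning-based OOD detection methods).
    p = np.exp(logits_ood - logits_ood.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    k = logits_ood.shape[1]
    reg = (np.log(k) + (p * np.log(p + 1e-12)).sum(axis=1)).mean()

    # Assumed modulating factors: uncertain samples weight the
    # classification task more; a confident batch weights the OOD
    # regularization more.
    w_id = (1.0 - conf) ** gamma      # per-sample factor on the ID loss
    w_ood = conf.mean() ** gamma      # batch-level factor on the regularizer

    return (w_id * ce).mean() + w_ood * reg
```

The point of the sketch is only the structure: both terms survive in every batch, and the balance between them shifts smoothly with prediction uncertainty rather than being fixed by a single static hyperparameter.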
