OrdinalCLIP: Learning Rank Prompts for Language-Guided Ordinal Regression
Wanhua Li · Xiaoke Huang · Zheng Zhu · Yansong Tang · Xiu Li · Jie Zhou · Jiwen Lu


This paper presents a language-powered paradigm for ordinal regression. Existing methods usually treat each rank as a category and employ a set of weights to learn these concepts. Such methods are prone to overfitting and usually attain unsatisfactory performance, since the learned concepts are derived mainly from the training set. Recent large pre-trained vision-language models like CLIP have shown impressive performance on various visual tasks. In this paper, we propose to learn the rank concepts from the rich semantic CLIP latent space. Specifically, we reformulate this task as an image-language matching problem with a contrastive objective, which regards labels as text and obtains a language prototype from a text encoder for each rank. Since prompt engineering for CLIP is extremely time-consuming, we propose OrdinalCLIP, a differentiable prompting method for adapting CLIP to ordinal regression. OrdinalCLIP consists of learnable context tokens and learnable rank embeddings. The learnable rank embeddings are constructed by explicitly modeling numerical continuity, resulting in well-ordered, compact language prototypes in the CLIP space. Once learned, we can save only the language prototypes and discard the huge language model, incurring zero additional computational overhead compared with the linear-head counterpart. Experimental results show that our paradigm achieves competitive performance on general ordinal regression tasks, and gains improvements in few-shot and distribution-shift settings for age estimation. The code is available at https://github.com/xk-huang/OrdinalCLIP.
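To make the two key ideas in the abstract concrete — rank embeddings constructed by interpolating a small set of base embeddings (so adjacent ranks stay close, enforcing ordinal continuity), and inference as image-language matching against per-rank prototypes — here is a minimal NumPy sketch. The function names, the choice of linear interpolation, and the expected-value decoding are illustrative assumptions, not the authors' exact implementation; see the linked repository for the real method.

```python
import numpy as np

def build_rank_embeddings(num_ranks, base_embeds):
    """Interpolate a few learnable base embeddings into one embedding per rank.

    Linear interpolation along the rank axis means neighboring ranks receive
    nearby embeddings, which is one simple way to model numerical continuity.
    """
    num_base, dim = base_embeds.shape
    positions = np.linspace(0, num_base - 1, num_ranks)  # rank -> base index
    lo = np.floor(positions).astype(int)
    hi = np.minimum(lo + 1, num_base - 1)
    frac = (positions - lo)[:, None]
    return (1 - frac) * base_embeds[lo] + frac * base_embeds[hi]

def predict_rank(image_feat, prototypes):
    """Match an image feature against per-rank language prototypes.

    Cosine similarities are turned into a distribution over ranks with a
    softmax; the expected value of that distribution is the predicted rank.
    """
    sims = prototypes @ image_feat / (
        np.linalg.norm(prototypes, axis=1) * np.linalg.norm(image_feat))
    probs = np.exp(sims - sims.max())
    probs /= probs.sum()
    return float(probs @ np.arange(len(prototypes)))

# Toy usage: 101 age ranks from 5 base embeddings, random "image" feature.
rng = np.random.default_rng(0)
base = rng.normal(size=(5, 16))
prototypes = build_rank_embeddings(101, base)   # shape (101, 16)
pred = predict_rank(rng.normal(size=16), prototypes)
```

In the full model the prototypes would come from the CLIP text encoder applied to learnable context tokens plus these rank embeddings; after training, only the prototype matrix needs to be stored, which is what makes the inference cost match a linear head.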

Author Information

Wanhua Li (Tsinghua University)
Xiaoke Huang (Tsinghua University)

Xiaoke Huang received a B.E. degree in Computer Science from Beijing Normal University in 2021. He is currently pursuing a Master's degree at Shenzhen International Graduate School, Tsinghua University, Shenzhen, China. His research interests include computer vision and graphics, especially human digitization and vision-language learning. He has published papers in top venues including CVPR and NeurIPS.

Zheng Zhu (Tsinghua University)
Yansong Tang (University of Oxford)
Xiu Li
Jie Zhou (Tsinghua University)
Jiwen Lu (Tsinghua University)