Type | Time | Title | Authors
Workshop | Sat 12:00 | Mitigate the Gap: Investigating Approaches for Improving Cross-Modal Alignment in CLIP | Sedigheh (Sarah) Eslami · Gerard de Melo
Workshop | Sat 16:15 | BLAP: Bootstrapping Language-Audio Pre-training for Music Captioning |
Poster | Thu 11:00 | PLIP: Language-Image Pre-training for Person Representation Learning | Jialong Zuo · Jiahao Hong · Feng Zhang · Changqian Yu · Hanyu Zhou · Changxin Gao · Nong Sang · Jingdong Wang
Workshop | Sat 14:30 | How to build fully open language models: from pre-training to post-training | Hannaneh Hajishirzi
Poster | Fri 11:00 | LoTLIP: Improving Language-Image Pre-training for Long Text Understanding | Wei Wu · Kecheng Zheng · Shuailei Ma · Fan Lu · Yuxin Guo · Yifei Zhang · Wei Chen · Qingpei Guo · Yujun Shen · Zheng-Jun Zha
Workshop | | OC-CLIP: Object-centric Binding in Contrastive Language-Image Pretraining | Rim Assouel · Pietro Astolfi · Florian Bordes · Michal Drozdzal · Adriana Romero
Workshop | Sat 14:45 | Contributed talk: Evaluating Gender Bias Transfer between Pre-trained and Prompt Adapted Language Models | Natalie Mackraz
Poster | Wed 11:00 | Reproducibility study of "LICO: Explainable Models with Language-Image Consistency" | Luan Fletcher · Robert van der Klis · Martin Sedlacek · Stefan Vasilev · Christos Athanasiadis
Workshop | | On Pre-training of Multimodal Language Models Customized for Chart Understanding | Wan-Cyuan Fan · Yen-Chun Chen · Mengchen Liu · Lu Yuan · Leonid Sigal
Workshop | Sat 11:00 | Optimizing Data Use for Efficient Pre-training | Danqi Chen
Poster | Wed 16:30 | Conjugated Semantic Pool Improves OOD Detection with Pre-trained Vision-Language Models | Mengyuan Chen · Junyu Gao · Changsheng Xu
Poster | Fri 11:00 | Classification Done Right for Vision-Language Pre-Training | Zilong Huang · Qinghao Ye · Bingyi Kang · Jiashi Feng · Haoqi Fan