Poster | Thu 16:30
Hallo3D: Multi-Modal Hallucination Detection and Mitigation for Consistent 3D Content Generation
Hongbo Wang · Jie Cao · Jin Liu · Xiaoqiang Zhou · Huaibo Huang · Ran He

Poster | Wed 11:00
Mitigating Object Hallucination via Concentric Causal Attention
Yun Xing · Yiheng Li · Ivan Laptev · Shijian Lu

Workshop
HSCL-RL: Mitigating Hallucinations in Multimodal Large Language Models
Zichen Song · 思潭 黄

Workshop
Pre-Training Multimodal Hallucination Detectors with Corrupted Grounding Data
Spencer Whitehead · Jacob Phillips · Sean Hendryx

Workshop | Sat 12:00
H-POPE: Hierarchical Polling-based Probing Evaluation of Hallucinations in Large Vision-Language Models
Nhi Pham · Michael Schott

Workshop
Mitigating Hallucination in Large Vision-Language Models via Modular Attribution and Intervention
Tianyun Yang · Ziniu Li · Juan Cao · Chang Xu

Workshop
Hallucination Detox: Sensitive Neuron Dropout (SeND) for Large Language Model Training
Shahrad Mohammadzadeh · Juan D. Guerra · Marco Bonizzato · Reihaneh Rabbany · Golnoosh Farnadi

Workshop
Just Rephrase It! Uncertainty Estimation in Closed-Source Language Models via Multiple Rephrased Queries
Adam Yang · Chen Chen · Konstantinos Pitas

Workshop
Multilingual Hallucination Gaps in Large Language Models
Cléa Chataigner · Afaf Taik · Golnoosh Farnadi

Workshop
THaMES: An End-to-End Tool for Hallucination Mitigation and Evaluation in Large Language Models
Mengfei Liang · Archish Arun · Zekun Wu · Cristian Villalobos · Jonathan Lutch · Emre Kazim · Adriano Koshiyama · Philip Treleaven

Workshop
CrossCheckGPT: Universal Hallucination Ranking for Multimodal Foundation Models
Guangzhi Sun · Potsawee Manakul · Adian Liusie · Kunat Pipatanakul · Chao Zhang · Phil Woodland · Mark Gales

Workshop
Trust but Verify: Reliable VLM Evaluation In-the-Wild with Program Synthesis
Viraj Uday Prabhu · Senthil Purushwalkam · Jieyu Zhang · An Yan · Caiming Xiong · Ran Xu