Search All 2024 Events

44 Results (Page 3 of 4)

Workshop
Sat 15:45 Mitigating LLM Hallucinations via Conformal Abstention
Yasin Abbasi Yadkori · Ilja Kuzborskij · David Stutz · András György · Adam Fisch · Arnaud Doucet · Iuliya Beloshapka · Wei-Hung Weng · Yao-Yuan Yang · Csaba Szepesvari · Taylan Cemgil · Nenad Tomasev
Workshop
Decreasing Inconsistencies in Differentially Private Language Models through Self-Distillation
Kieleh Ngong Ivoline Clarisse · Joseph Near · Niloofar Mireshghallah
Workshop
Incorporating Generative Feedback for Mitigating Hallucinations in Large Vision-Language Models
Ce Zhang · Zifu Wan · Zhehan Kan · Martin Q. Ma · Simon Stepputtis · Deva Ramanan · Ruslan Salakhutdinov · Louis-Philippe Morency · Katia Sycara · Yaqi Xie
Workshop
Characterizing Context Memorization and Hallucination of Language Models
James Flemings · Wanrong Zhang · Bo Jiang · Zafar Takhirov · Murali Annavaram
Workshop
Interactive Semantic Interventions for VLMs: A Causality-Inspired Investigation of VLM Failures
Lukas Klein · Kenza Amara · Carsten Lüth · Hendrik Strobelt · Mennatallah El-Assady · Paul Jaeger
Workshop
Interactive Semantic Interventions for VLMs: Breaking VLMs with Human Ingenuity
Lukas Klein · Kenza Amara · Carsten Lüth · Hendrik Strobelt · Mennatallah El-Assady · Paul Jaeger
Workshop
LLM Hallucination Reasoning with Zero-shot Knowledge Test
Seongmin Lee · Hsiang Hsu · Richard Chen
Workshop
Sat 12:00 To Believe or Not to Believe Your LLM
Yasin Abbasi Yadkori · Ilja Kuzborskij · András György · Csaba Szepesvari
Workshop
Sat 12:00 Just rephrase it! Uncertainty estimation in closed-source language models via multiple rephrased queries
Adam Yang · Chen Chen · Konstantinos Pitas
Workshop
Sat 15:45 Interactive Semantic Interventions for VLMs: A Human-in-the-Loop Approach to Interpretability
Lukas Klein · Kenza Amara · Carsten Lüth · Hendrik Strobelt · Mennatallah El-Assady · Paul Jaeger
Workshop
Sat 15:45 Mitigating Hallucination in Large Language Models with Explanatory Prompting
Alexander Braverman · Weitong Zhang · Quanquan Gu
Workshop
HalLoc: Token-level Localization of Hallucinations for Large Vision Language Models
Eunkyu Park · Minyeong Kim · Gunhee Kim