

Search All 2024 Events

295 Results

Page 3 of 25
Poster
Wed 11:00 The PRISM Alignment Dataset: What Participatory, Representative and Individualised Human Feedback Reveals About the Subjective and Multicultural Alignment of Large Language Models
Hannah Rose Kirk · Alexander Whitefield · Paul Rottger · Andrew M. Bean · Katerina Margatina · Rafael Mosquera-Gomez · Juan Ciro · Max Bartolo · Adina Williams · He He · Bertie Vidgen · Scott Hale
Poster
Fri 11:00 ProgressGym: Alignment with a Millennium of Moral Progress
Tianyi (Alex) Qiu · Yang Zhang · Xuchuan Huang · Jasmine Li · Jiaming Ji · Yaodong Yang
Poster
Thu 11:00 Panacea: Pareto Alignment via Preference Adaptation for LLMs
Yifan Zhong · Chengdong Ma · Xiaoyuan Zhang · Ziran Yang · Haojun Chen · Qingfu Zhang · Siyuan Qi · Yaodong Yang
Poster
Wed 16:30 In-N-Out: Lifting 2D Diffusion Prior for 3D Object Removal via Tuning-Free Latents Alignment
Dongting Hu · Huan Fu · Jiaxian Guo · Liuhua Peng · Tingjin Chu · Feng Liu · Tongliang Liu · Mingming Gong
Poster
Thu 16:30 BoNBoN Alignment for Large Language Models and the Sweetness of Best-of-n Sampling
Lin Gui · Cristina Garbacea · Victor Veitch
Poster
Wed 16:30 Expectation Alignment: Handling Reward Misspecification in the Presence of Expectation Mismatch
Malek Mechergui · Sarath Sreedharan
Poster
Thu 11:00 Biomedical Visual Instruction Tuning with Clinician Preference Alignment
Hejie Cui · Lingjun Mao · Xin Liang · Jieyu Zhang · Hui Ren · Quanzheng Li · Xiang Li · Carl Yang
Expo Talk Panel
Wed 13:00 Industrial Applications of Distributional Preference Alignment of LLMs via Optimal Transport
Youssef Mroueh
Poster
Thu 16:30 Test-time Adaptation in Non-stationary Environments via Adaptive Representation Alignment
Zhen-Yu Zhang · Zhiyu Xie · Huaxiu Yao · Masashi Sugiyama
Oral
Wed 10:40 The PRISM Alignment Dataset: What Participatory, Representative and Individualised Human Feedback Reveals About the Subjective and Multicultural Alignment of Large Language Models
Hannah Rose Kirk · Alexander Whitefield · Paul Rottger · Andrew M. Bean · Katerina Margatina · Rafael Mosquera-Gomez · Juan Ciro · Max Bartolo · Adina Williams · He He · Bertie Vidgen · Scott Hale
Poster
Wed 16:30 FLAME: Factuality-Aware Alignment for Large Language Models
Sheng-Chieh Lin · Luyu Gao · Barlas Oguz · Wenhan Xiong · Jimmy Lin · Scott Yih · Xilun Chen
Poster
Wed 16:30 Interpretable Concept Bottlenecks to Align Reinforcement Learning Agents
Quentin Delfosse · Sebastian Sztwiertnia · Mark Rothermel · Wolfgang Stammer · Kristian Kersting