Search All 2024 Events
65 Results

Poster
Fri 16:30 MetaAligner: Towards Generalizable Multi-Objective Alignment of Language Models
Kailai Yang · Zhiwei Liu · Qianqian Xie · Jimin Huang · Tianlin Zhang · Sophia Ananiadou
Poster
Fri 11:00 Can an AI Agent Safely Run a Government? Existence of Probably Approximately Aligned Policies
Frédéric Berdoz · Roger Wattenhofer
Poster
Fri 11:00 A-FedPD: Aligning Dual-Drift is All Federated Primal-Dual Learning Needs
Yan Sun · Li Shen · Dacheng Tao
Poster
Fri 11:00 A Critical Evaluation of AI Feedback for Aligning Large Language Models
Archit Sharma · Sedrick Scott Keh · Eric Mitchell · Chelsea Finn · Kushal Arora · Thomas Kollar
Oral
Fri 10:00 Flipped Classroom: Aligning Teacher Attention with Student in Generalized Category Discovery
Haonan Lin · Wenbin An · Jiahao Wang · Yan Chen · Feng Tian · Mengmeng Wang · QianYing Wang · Guang Dai · Jingdong Wang
Poster
Wed 16:30 Decoding-Time Language Model Alignment with Multiple Objectives
Ruizhe Shi · Yifang Chen · Yushi Hu · Alisa Liu · Hanna Hajishirzi · Noah Smith · Simon Du
Poster
Wed 11:00 Aligning to Thousands of Preferences via System Message Generalization
Seongyun Lee · Sue Hyun Park · Seungone Kim · Minjoon Seo
Poster
Wed 11:00 Regularized Conditional Diffusion Model for Multi-Task Preference Alignment
Xudong Yu · Chenjia Bai · Haoran He · Changhong Wang · Xuelong Li
Poster
Thu 11:00 Cal-DPO: Calibrated Direct Preference Optimization for Language Model Alignment
Teng Xiao · Yige Yuan · Huaisheng Zhu · Mingxiao Li · Vasant Honavar
Poster
Wed 16:30 Aligning Audio-Visual Joint Representations with an Agentic Workflow
Shentong Mo · Yibing Song
Workshop
Aligning Touch, Vision, and Language for Multimodal Perception
Max Fu · Gaurav Datta · Huang Huang · William Panitch · Jaimyn Drake · Joseph Ortiz · Mustafa Mukadam · Mike Lambeta · Roberto Calandra · Ken Goldberg
Workshop
Value pluralism and AI value alignment
Atoosa Kasirzadeh