Search All 2024 Events
65 Results

Page 2 of 6
Poster
Wed 11:00 SafeWorld: Geo-Diverse Safety Alignment
Da Yin · Haoyi Qiu · Kung-Hsiang Huang · Kai-Wei Chang · Nanyun Peng
Poster
Fri 11:00 Flipped Classroom: Aligning Teacher Attention with Student in Generalized Category Discovery
Haonan Lin · Wenbin An · Jiahao Wang · Yan Chen · Feng Tian · Mengmeng Wang · QianYing Wang · Guang Dai · Jingdong Wang
Poster
Wed 16:30 In-N-Out: Lifting 2D Diffusion Prior for 3D Object Removal via Tuning-Free Latents Alignment
Dongting Hu · Huan Fu · Jiaxian Guo · Liuhua Peng · Tingjin Chu · Feng Liu · Tongliang Liu · Mingming Gong
Poster
Wed 11:00 Aligning to Thousands of Preferences via System Message Generalization
Seongyun Lee · Sue Hyun Park · Seungone Kim · Minjoon Seo
Poster
Fri 16:30 Axioms for AI Alignment from Human Feedback
Luise Ge · Daniel Halpern · Evi Micha · Ariel Procaccia · Itai Shapira · Yevgeniy Vorobeychik · Junlin Wu
Poster
Fri 16:30 MetaAligner: Towards Generalizable Multi-Objective Alignment of Language Models
Kailai Yang · Zhiwei Liu · Qianqian Xie · Jimin Huang · Tianlin Zhang · Sophia Ananiadou
Poster
Fri 11:00 Can an AI Agent Safely Run a Government? Existence of Probably Approximately Aligned Policies
Frédéric Berdoz · Roger Wattenhofer
Poster
Fri 11:00 A-FedPD: Aligning Dual-Drift is All Federated Primal-Dual Learning Needs
Yan Sun · Li Shen · Dacheng Tao
Poster
Fri 16:30 Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision
Zhiqing Sun · Longhui Yu · Yikang Shen · Weiyang Liu · Yiming Yang · Sean Welleck · Chuang Gan
Poster
Fri 11:00 A Critical Evaluation of AI Feedback for Aligning Large Language Models
Archit Sharma · Sedrick Scott Keh · Eric Mitchell · Chelsea Finn · Kushal Arora · Thomas Kollar
Workshop
Value pluralism and AI value alignment
Atoosa Kasirzadeh
Workshop
Sat 16:15 Hannah Rose Kirk: Putting the H Back in RLHF: Challenging assumptions of human behaviour for AI alignment