

Workshop

Pluralistic Alignment Workshop

Mikhail Terekhov · Moksh Jain · Ruyuan Wan · Maarten Sap · Mitchell Gordon · Dongyeop Kang · Caglar Gulcehre · Amy Zhang · He He

West Meeting Room 116, 117

Sat 14 Dec, 8:15 a.m. PST

Aligning AI with human preferences and societal values is increasingly important. Yet, today's AI alignment methods have been shown to be insufficient for capturing the vast space of complex, and often conflicting, real-world values. Our workshop will discuss how to integrate diverse perspectives, values, and expertise into pluralistic AI alignment. We aim to explore new methods for multi-objective alignment, drawing inspiration from governance and consensus-building practices to address conflicting values. Discussion will include technical approaches for dataset collection, algorithm development, and the design of human-AI interaction workflows that reflect pluralistic values among diverse populations. By gathering experts from various fields, this workshop seeks to foster interdisciplinary collaboration and push the boundaries of the understanding, development, and practice of pluralistic AI alignment.


