Workshop
Sat Dec 09 08:00 AM -- 06:30 PM (PST) @ 201 B
Aligned Artificial Intelligence
Dylan Hadfield-Menell · Jacob Steinhardt · David Duvenaud · David Krueger · Anca Dragan

Workshop Home Page

To be helpful to users and to society at large, an autonomous agent needs to be aligned with the objectives of its stakeholders. Misaligned incentives are a common and serious problem with human agents; we should expect similar challenges to arise when artificial agents' incentives are misaligned. For example, it is not uncommon to see reinforcement learning agents ‘hack’ their specified reward function. How do we build learning systems that will reliably achieve a user's intended objective? How can we ensure that autonomous agents behave reliably in unforeseen situations? How do we design systems whose behavior will be aligned with the values and goals of society at large? As AI capabilities develop, it is crucial for the AI community to arrive at satisfying and trustworthy answers to these questions. This workshop will focus on three central challenges in value alignment: learning complex rewards that reflect human preferences (e.g., meaningful oversight, preference elicitation, inverse reinforcement learning, learning from demonstration or feedback); engineering reliable AI systems (e.g., robustness to distributional shift, model misspecification, or adversarial data, via methods such as adversarial training, KWIK-style learning, or transparency to human inspection); and dealing with bounded rationality and incomplete information in both AI systems and their users (e.g., acting on incomplete task specifications, learning from users who sometimes make mistakes). We also welcome submissions that do not fit these categories directly but deal broadly with problems of value alignment in artificial intelligence.
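
As a toy illustration of the reward-hacking problem mentioned above (a hypothetical cleaning-robot sketch in Python, not drawn from any workshop paper), a misspecified proxy reward can score a "hacking" policy higher than an aligned one while leaving the intended objective worse off:

    # Hypothetical toy example: a proxy reward that an agent can "hack".
    # The designer wants a clean room, but the specified reward only counts
    # dust-collection events, so re-collecting the same dust scores higher
    # on the proxy while the room is no cleaner.

    def proxy_reward(events):
        # Specified reward: +1 per "collect" event (what gets measured).
        return sum(1 for e in events if e == "collect")

    def true_objective(dust_remaining):
        # Intended objective: as little dust left in the room as possible.
        return -dust_remaining

    aligned = ["collect"] * 10          # picks up all 10 units of dust once
    hacking = ["collect", "dump"] * 50  # recycles the same dust repeatedly

    print(proxy_reward(aligned), true_objective(dust_remaining=0))   # 10   0
    print(proxy_reward(hacking), true_objective(dust_remaining=10))  # 50  -10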

Opening Remarks (Talk)
Dylan Hadfield-Menell
Dynamic Safe Interruptibility for Decentralized Multi-Agent Reinforcement Learning (Talk)
Hadrien Hendrikx
Minimax-Regret Querying on Side Effects in Factored Markov Decision Processes (Talk)
Satinder Singh
Robust Covariate Shift with Exact Loss Functions (Contributed Talk)
Angie Liu
Adversarial Robustness for Aligned AI (Talk)
Ian Goodfellow
Incomplete Contracting and AI Alignment (Talk)
Gillian Hadfield
Learning from Human Feedback (Talk)
Paul Christiano
Finite Supervision Reinforcement Learning (Contributed Talk)
William Saunders, Eric Langlois
Safer Classification by Synthesis (Contributed Talk)
William Wang
Aligned AI Poster Session (Poster Session)
Amanda Askell, Rafal Muszynski, William Wang, Yaodong Yang, Quoc Nguyen, Bryan Kian Hsiang Low, Patrick Jaillet, Candice Schumann, Angie Liu, Peter Eckersley, Angelina Wang, William Saunders
Machine Learning for Human Deliberative Judgment (Talk)
Owain Evans
Learning Reward Functions (Talk)
Jan Leike
Informal Technical Discussion: Open Problems in AI Alignment (Discussion)