Workshop
Sat Dec 9th 08:00 AM -- 06:30 PM @ 201 B
Aligned Artificial Intelligence
Dylan Hadfield-Menell · Jacob Steinhardt · David Duvenaud · David Krueger · Anca Dragan

In order to be helpful to users and to society at large, an autonomous agent needs to be aligned with the objectives of its stakeholders. Misaligned incentives are a common and consequential problem among human agents, and we should expect similar challenges to arise with artificial agents; for example, reinforcement learning agents frequently ‘hack’ their specified reward function. How do we build learning systems that reliably achieve a user's intended objective? How can we ensure that autonomous agents behave reliably in unforeseen situations? How do we design systems whose behavior will be aligned with the values and goals of society at large? As AI capabilities develop, it is crucial for the AI community to arrive at satisfying and trustworthy answers to these questions.

This workshop will focus on three central challenges in value alignment: (1) learning complex rewards that reflect human preferences (e.g., meaningful oversight, preference elicitation, inverse reinforcement learning, learning from demonstration or feedback); (2) engineering reliable AI systems (e.g., robustness to distributional shift, model misspecification, or adversarial data, via methods such as adversarial training, KWIK-style learning, or transparency to human inspection); and (3) dealing with bounded rationality and incomplete information in both AI systems and their users (e.g., acting on incomplete task specifications, learning from users who sometimes make mistakes). We also welcome submissions that do not fit neatly into these categories but address other problems related to value alignment in artificial intelligence.
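The reward-hacking example mentioned above can be made concrete with a small sketch. The toy Q-learning script below is an illustration added here, not drawn from any workshop paper: an agent is trained on a proxy reward that omits a penalty for a side effect (breaking a "vase" on the short path to the goal), so the learned policy scores well on the proxy but poorly on the intended objective. The gridworld layout, reward values, and hyperparameters are all assumptions chosen for illustration.

import random

# Assumed toy environment: 2x4 grid, start and goal on the bottom row,
# with a "vase" on the direct path. The top row is a longer, vase-free detour.
ROWS, COLS = 2, 4
START, GOAL, VASE = (1, 0), (1, 3), (1, 1)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
STEP_PENALTY, GOAL_REWARD, VASE_PENALTY = -1.0, 10.0, -5.0


def step(state, action):
    """Move within the grid bounds; return the next state."""
    r = min(max(state[0] + action[0], 0), ROWS - 1)
    c = min(max(state[1] + action[1], 0), COLS - 1)
    return (r, c)


def reward(next_state, intended):
    """Proxy reward ignores the vase; the intended reward penalizes breaking it."""
    r = STEP_PENALTY + (GOAL_REWARD if next_state == GOAL else 0.0)
    if intended and next_state == VASE:
        r += VASE_PENALTY   # this term is missing from the specified (proxy) reward
    return r


def train_on_proxy(episodes=2000, alpha=0.5, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning that only ever observes the proxy reward."""
    q = {(i, j): [0.0] * len(ACTIONS) for i in range(ROWS) for j in range(COLS)}
    for _ in range(episodes):
        s = START
        for _ in range(50):
            if random.random() < epsilon:
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q[s][i])
            s2 = step(s, ACTIONS[a])
            r = reward(s2, intended=False)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
            if s == GOAL:
                break
    return q


def evaluate(q, intended):
    """Roll out the greedy policy and score it under either reward function."""
    s, total = START, 0.0
    for _ in range(50):
        a = max(range(len(ACTIONS)), key=lambda i: q[s][i])
        s = step(s, ACTIONS[a])
        total += reward(s, intended)
        if s == GOAL:
            break
    return total


if __name__ == "__main__":
    random.seed(0)
    q = train_on_proxy()
    # The greedy policy cuts straight through the vase: high proxy return,
    # lower return under the intended objective than the vase-avoiding detour.
    print("return under proxy reward:   ", evaluate(q, intended=False))
    print("return under intended reward:", evaluate(q, intended=True))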

09:15 AM Opening Remarks (Talk)
Dylan Hadfield-Menell
09:30 AM Dynamic Safe Interruptibility for Decentralized Multi-Agent Reinforcement Learning (Talk)
Hadrien Hendrikx
09:45 AM Minimax-Regret Querying on Side Effects in Factored Markov Decision Processes (Talk)
Satinder Singh
10:15 AM Robust Covariate Shift with Exact Loss Functions (Contributed Talk)
Angie Liu
11:00 AM Adversarial Robustness for Aligned AI (Talk)
Ian Goodfellow
11:30 AM Incomplete Contracting and AI Alignment (Talk)
Gillian Hadfield
01:15 PM Learning from Human Feedback (Talk)
Paul Christiano
01:45 PM Finite Supervision Reinforcement Learning (Contributed Talk)
William Saunders, Eric Langlois
02:00 PM Safer Classification by Synthesis (Contributed Talk)
William Wang
02:15 PM Aligned AI Poster Session (Poster Session)
Amanda Askell, Rafal Muszynski, William Wang, Yaodong Yang, Quoc Nguyen, Bryan Kian Hsiang Low, Patrick Jaillet, Candice Schumann, Angie Liu, Peter Eckersley, Angelina Wang, William Saunders
03:30 PM Machine Learning for Human Deliberative Judgment (Talk)
Owain Evans
04:00 PM Learning Reward Functions (Talk)
Jan Leike
04:30 PM Informal Technical Discussion: Open Problems in AI Alignment (Discussion)