Workshop
Sat Dec 09 08:00 AM -- 06:30 PM (PST) @ 201 B
Aligned Artificial Intelligence
Dylan Hadfield-Menell · Jacob Steinhardt · David Duvenaud · David Krueger · Anca Dragan





Workshop Home Page

In order to be helpful to users and to society at large, an autonomous agent needs to be aligned with the objectives of its stakeholders. Misaligned incentives are a common and crucial problem with human agents; we should expect similar challenges to arise from misaligned incentives with artificial agents. For example, it is not uncommon to see reinforcement learning agents ‘hack’ their specified reward function. How do we build learning systems that will reliably achieve a user's intended objective? How can we ensure that autonomous agents behave reliably in unforeseen situations? How do we design systems whose behavior will be aligned with the values and goals of society at large? As AI capabilities develop, it is crucial for the AI community to come to satisfying and trustworthy answers to these questions.

This workshop will focus on three central challenges in value alignment:
(1) learning complex rewards that reflect human preferences (e.g. meaningful oversight, preference elicitation, inverse reinforcement learning, learning from demonstration or feedback);
(2) engineering reliable AI systems (e.g. robustness to distributional shift, model misspecification, or adversarial data, via methods such as adversarial training, KWIK-style learning, or transparency to human inspection); and
(3) dealing with bounded rationality and incomplete information in both AI systems and their users (e.g. acting on incomplete task specifications, learning from users who sometimes make mistakes).

We also welcome submissions that do not directly fit these categories but generally deal with problems relating to value alignment in artificial intelligence.
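To make the first theme concrete, the sketch below (not part of the workshop materials) shows one simple form of preference-based reward learning: a linear reward model fit to synthetic pairwise trajectory comparisons with a Bradley-Terry style logistic loss. The feature dimensions, data, and hyperparameters are invented purely for illustration.

# Minimal sketch of learning a reward function from pairwise preferences.
# All quantities here are synthetic and chosen only for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Each trajectory is summarized by a feature vector; the hidden "true"
# reward is assumed linear in those features.
n_features = 5
true_w = rng.normal(size=n_features)

# Simulate noisy comparisons between trajectory pairs: trajectory A is
# preferred over B with probability given by a logistic (Bradley-Terry)
# model of the difference in true returns.
n_pairs = 500
feats_a = rng.normal(size=(n_pairs, n_features))
feats_b = rng.normal(size=(n_pairs, n_features))
p_prefer_a = 1.0 / (1.0 + np.exp(-(feats_a - feats_b) @ true_w))
prefs = (rng.uniform(size=n_pairs) < p_prefer_a).astype(float)  # 1 if A preferred

# Fit reward weights by gradient descent on the logistic preference loss.
w = np.zeros(n_features)
lr = 0.05
for step in range(2000):
    logits = (feats_a - feats_b) @ w
    probs = 1.0 / (1.0 + np.exp(-logits))
    grad = (feats_a - feats_b).T @ (probs - prefs) / n_pairs
    w -= lr * grad

# The learned weights should roughly align (up to scale) with the true ones.
cosine = w @ true_w / (np.linalg.norm(w) * np.linalg.norm(true_w))
print("cosine similarity between learned and true reward weights:", round(cosine, 3))

In practice the reward model would be nonlinear and the comparisons would come from human annotators rather than a simulated oracle, but the same likelihood-based fitting idea underlies much of the preference-elicitation work discussed in the program below.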

Opening Remarks (Talk)
Dynamic Safe Interruptibility for Decentralized Multi-Agent Reinforcement Learning (Talk)
Minimax-Regret Querying on Side Effects in Factored Markov Decision Processes (Talk)
Robust Covariate Shift with Exact Loss Functions (Contributed Talk)
Adversarial Robustness for Aligned AI (Talk)
Incomplete Contracting and AI Alignment (Talk)
Learning from Human Feedback (Talk)
Finite Supervision Reinforcement Learning (Contributed Talk)
Safer Classification by Synthesis (Contributed Talk)
Aligned AI Poster Session (Poster Session)
Machine Learning for Human Deliberative Judgment (Talk)
Learning Reward Functions (Talk)
Informal Technical Discussion: Open Problems in AI Alignment (Discussion)