We present a novel form of explanation for Reinforcement Learning, based on the notion of an intended outcome. These explanations describe the outcome an agent is trying to achieve by its actions. We provide a simple proof that general methods for post-hoc explanations of this nature are impossible in traditional reinforcement learning. Rather, the information needed for the explanations must be collected in conjunction with training the agent. We derive approaches designed to extract local explanations based on intention for several variants of Q-function approximation and prove consistency between the explanations and the Q-values learned. We demonstrate our method on multiple reinforcement learning problems, and provide code to help researchers introspect their RL environments and algorithms.
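To make the abstract's central idea concrete, below is a minimal illustrative sketch, not the authors' exact algorithm: tabular Q-learning on a toy chain environment, augmented with an auxiliary contribution table C[s, a, x] that is updated alongside the Q-table (i.e. collected in conjunction with training, as the abstract requires) and records how much of the expected return the agent anticipates collecting at each future state x. The environment, constants, and variable names (step, N_STATES, C, etc.) are assumptions introduced for this example only.

```python
# Illustrative sketch (assumed setup, not the paper's exact method):
# tabular Q-learning plus a contribution table C[s, a, x] giving the
# discounted reward the agent expects to collect at future state x.
# By construction, C[s, a].sum() tracks Q[s, a], and the largest entries
# of C[s, a] can be read as the "intended outcome" of taking a in s.
import numpy as np

N_STATES, N_ACTIONS = 6, 2          # states 0..5, actions: 0 = left, 1 = right
GOAL = N_STATES - 1                 # reward is collected on entering state 5
ALPHA, GAMMA, EPS, EPISODES = 0.1, 0.9, 0.1, 5000
rng = np.random.default_rng(0)

Q = np.zeros((N_STATES, N_ACTIONS))
C = np.zeros((N_STATES, N_ACTIONS, N_STATES))

def step(s, a):
    """Deterministic chain: move left/right, +1 reward on reaching the goal."""
    s2 = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == GOAL else 0.0
    return s2, r, s2 == GOAL

for _ in range(EPISODES):
    s, done = 0, False
    while not done:
        a = int(rng.integers(N_ACTIONS)) if rng.random() < EPS else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        a_star = int(np.argmax(Q[s2]))
        # Standard Q-learning target, plus a matching vector-valued target
        # that attributes the immediate reward to the state where it occurs.
        q_target = r + (0.0 if done else GAMMA * Q[s2, a_star])
        c_target = np.zeros(N_STATES)
        c_target[s2] = r
        if not done:
            c_target += GAMMA * C[s2, a_star]
        Q[s, a] += ALPHA * (q_target - Q[s, a])
        C[s, a] += ALPHA * (c_target - C[s, a])
        s = s2

# Consistency check: the contributions sum back to the Q-value, and their
# largest entries name the states the agent "intends" to reach.
s, a = 0, 1
print("Q(0, right)          =", round(Q[s, a], 3))
print("sum of contributions =", round(C[s, a].sum(), 3))
print("top intended states  =", np.argsort(C[s, a])[::-1][:3])
```

Because C and Q share the same learning rate and the same greedy bootstrap action, the sum of each contribution vector converges together with the corresponding Q-value, which is a toy analogue of the consistency property the abstract refers to.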
Author Information
Herman Yau (University of Surrey)
Chris Russell (Amazon Web Services)
Simon Hadfield (University of Surrey)
More from the Same Authors
- 2019 Poster: Fixing Implicit Derivatives: Trust-Region Based Learning of Continuous Energy Functions
  Chris Russell · Matteo Toso · Neill Campbell
- 2017 Poster: VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning
  Akash Srivastava · Lazar Valkov · Chris Russell · Michael Gutmann · Charles Sutton
- 2017 Poster: Counterfactual Fairness
  Matt Kusner · Joshua Loftus · Chris Russell · Ricardo Silva
- 2017 Oral: Counterfactual Fairness
  Matt Kusner · Joshua Loftus · Chris Russell · Ricardo Silva
- 2017 Poster: When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness
  Chris Russell · Matt Kusner · Joshua Loftus · Ricardo Silva