We propose to learn to distinguish reversible from irreversible actions for better informed decision-making in Reinforcement Learning (RL). From theoretical considerations, we show that approximate reversibility can be learned through a simple surrogate task: ranking randomly sampled trajectory events in chronological order. Intuitively, pairs of events that are always observed in the same order are likely to be separated by an irreversible sequence of actions. Conveniently, learning the temporal order of events can be done in a fully self-supervised way, which we use to estimate the reversibility of actions from experience, without any priors. We propose two strategies that incorporate reversibility into RL agents, one for exploration (RAE) and one for control (RAC). We demonstrate the potential of reversibility-aware agents in several environments, including the challenging Sokoban game. In synthetic tasks, we show that we can learn control policies that never fail and reduce the side-effects of interactions to zero, even without access to the reward function.
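The abstract describes the surrogate task only at a high level, so below is a minimal sketch of the self-supervised temporal-order objective, assuming a small feed-forward classifier over observation pairs and uniform pair sampling. The names `PrecedenceClassifier` and `sample_pairs`, the network size, and the placeholder trajectories are illustrative assumptions, not the authors' implementation.

```python
# Sketch (assumptions noted above): train a classifier to predict whether
# observation s_a precedes s_b within a trajectory. Pairs whose order the
# classifier predicts with high, one-sided confidence are likely separated
# by an approximately irreversible sequence of actions.
import random
import torch
import torch.nn as nn


class PrecedenceClassifier(nn.Module):
    """Outputs the logit of P(s_a occurs before s_b) for a pair of flat observations."""

    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s_a: torch.Tensor, s_b: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([s_a, s_b], dim=-1)).squeeze(-1)


def sample_pairs(trajectory: torch.Tensor, batch_size: int):
    """Sample observation pairs from one trajectory; label 1 when presented in true order."""
    T = trajectory.shape[0]
    idx = torch.randint(0, T, (batch_size, 2))
    idx = idx[idx[:, 0] != idx[:, 1]]              # drop degenerate same-timestep pairs
    i, j = idx.min(dim=1).values, idx.max(dim=1).values
    flip = torch.rand(idx.shape[0]) < 0.5          # randomly swap presentation order
    first = torch.where(flip.unsqueeze(-1), trajectory[j], trajectory[i])
    second = torch.where(flip.unsqueeze(-1), trajectory[i], trajectory[j])
    labels = (~flip).float()                       # 1 = pair shown in chronological order
    return first, second, labels


obs_dim = 8
model = PrecedenceClassifier(obs_dim)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Placeholder data; in practice trajectories come from the agent's own experience.
trajectories = [torch.randn(100, obs_dim) for _ in range(16)]

for step in range(200):
    traj = random.choice(trajectories)
    s_a, s_b, y = sample_pairs(traj, batch_size=64)
    loss = loss_fn(model(s_a, s_b), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# If sigmoid(model(s, s_next)) stays near 1 while sigmoid(model(s_next, s)) stays near 0,
# the transition s -> s_next is estimated to be hard to undo, i.e. approximately irreversible.
```

Randomly flipping the presentation order keeps the labels balanced; the classifier can only succeed by learning which state tends to come first, and that learned asymmetry is what serves as the reversibility estimate that an exploration bonus (RAE) or an action rejection rule (RAC) could then consume.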
Author Information
Nathan Grinsztajn (Inria)
Johan Ferret (Google Brain / Inria Scool)
Olivier Pietquin (Google Research Brain Team)
Philippe Preux (Inria)
Matthieu Geist (Université de Lorraine)
More from the Same Authors
- 2021 : Continuous Control with Action Quantization from Demonstrations »
  Robert Dadashi · Leonard Hussenot · Damien Vincent · Anton Raichuk · Matthieu Geist · Olivier Pietquin
- 2021 : Implicitly Regularized RL with Implicit Q-values »
  Nino Vieillard · Marcin Andrychowicz · Anton Raichuk · Olivier Pietquin · Matthieu Geist
- 2022 : Better state exploration using action sequence equivalence »
  Nathan Grinsztajn · Toby Johnstone · Johan Ferret · Philippe Preux
- 2021 Poster: Twice regularized MDPs and the equivalence between robustness and regularization »
  Esther Derman · Matthieu Geist · Shie Mannor
- 2021 Poster: What Matters for Adversarial Imitation Learning? »
  Manu Orsini · Anton Raichuk · Leonard Hussenot · Damien Vincent · Robert Dadashi · Sertan Girgin · Matthieu Geist · Olivier Bachem · Olivier Pietquin · Marcin Andrychowicz
- 2019 : Poster Presentations »
  Rahul Mehta · Andrew Lampinen · Binghong Chen · Sergio Pascual-Diaz · Jordi Grau-Moya · Aldo Faisal · Jonathan Tompson · Yiren Lu · Khimya Khetarpal · Martin Klissarov · Pierre-Luc Bacon · Doina Precup · Thanard Kurutach · Aviv Tamar · Pieter Abbeel · Jinke He · Maximilian Igl · Shimon Whiteson · Wendelin Boehmer · Raphaël Marinier · Olivier Pietquin · Karol Hausman · Sergey Levine · Chelsea Finn · Tianhe Yu · Lisa Lee · Benjamin Eysenbach · Emilio Parisotto · Eric Xing · Ruslan Salakhutdinov · Hongyu Ren · Anima Anandkumar · Deepak Pathak · Christopher Lu · Trevor Darrell · Alexei Efros · Phillip Isola · Feng Liu · Bo Han · Gang Niu · Masashi Sugiyama · Saurabh Kumar · Janith Petangoda · Johan Ferret · James McClelland · Kara Liu · Animesh Garg · Robert Lange
- 2019 : Oral Presentations »
  Janith Petangoda · Sergio Pascual-Diaz · Jordi Grau-Moya · Raphaël Marinier · Olivier Pietquin · Alexei Efros · Phillip Isola · Trevor Darrell · Christopher Lu · Deepak Pathak · Johan Ferret
- 2017 Poster: Is the Bellman residual a bad proxy? »
  Matthieu Geist · Bilal Piot · Olivier Pietquin
- 2017 Poster: Reconstruct & Crush Network »
  Erinc Merdivan · Mohammad Reza Loghmani · Matthieu Geist
- 2014 Poster: Difference of Convex Functions Programming for Reinforcement Learning »
  Bilal Piot · Matthieu Geist · Olivier Pietquin
- 2014 Spotlight: Difference of Convex Functions Programming for Reinforcement Learning »
  Bilal Piot · Matthieu Geist · Olivier Pietquin
- 2012 Poster: Inverse Reinforcement Learning through Structured Classification »
  Edouard Klein · Matthieu Geist · Bilal Piot · Olivier Pietquin