Adversarial imitation learning has become a popular framework for imitation in continuous control. Over the years, several variations of its components have been proposed to enhance the performance of the learned policies as well as the sample complexity of the algorithm. In practice, these choices are rarely tested all together in rigorous empirical studies. It is therefore difficult to discuss and understand which choices, among the high-level algorithmic options as well as low-level implementation details, actually matter. To tackle this issue, we implement more than 50 of these choices in a generic adversarial imitation learning framework and investigate their impact in a large-scale study (>500k trained agents) with both synthetic and human-generated demonstrations. We analyze the key results and highlight the most surprising findings.
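The abstract does not spell out the framework's internals, but the common structure of adversarial imitation learning is: a discriminator is trained to distinguish expert transitions from policy transitions, and its output is turned into a reward that the policy then maximizes with a standard RL algorithm. The sketch below is a rough illustration of one such low-level choice, how discriminator logits can be mapped to an imitation reward; the function name, the reward-type labels, and the sign convention (D close to 1 meaning "expert-like") are illustrative assumptions, not the paper's code.

import numpy as np

def discriminator_reward(logits, kind="gail"):
    """Map discriminator logits D(s, a) to an imitation reward (illustrative)."""
    logits = np.asarray(logits, dtype=np.float64)
    # sigmoid(logit): estimated probability that the transition is expert-like.
    prob = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-8  # numerical safety for the logarithms
    if kind == "gail":   # -log(1 - D): unbounded above, pushes the policy to fool D
        return -np.log(1.0 - prob + eps)
    if kind == "airl":   # log D - log(1 - D), which is simply the raw logit
        return logits
    if kind == "logd":   # log D: bounded above by zero
        return np.log(prob + eps)
    raise ValueError(f"unknown reward type: {kind}")

# Example: rewards for three policy transitions scored by the discriminator.
logits = np.array([-2.0, 0.0, 1.5])
for kind in ("gail", "airl", "logd"):
    print(kind, discriminator_reward(logits, kind))

These mappings behave differently as the discriminator becomes confident (one diverges, one saturates), which is exactly the kind of seemingly minor design decision whose effect such a study can quantify.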
Author Information
Manu Orsini (Google)
Anton Raichuk (Google)
Leonard Hussenot (Google Research, Brain Team)
Damien Vincent (Google Brain)
Robert Dadashi (Google Brain)
Sertan Girgin
Matthieu Geist (Université de Lorraine)
Olivier Bachem (Google Brain)
Olivier Pietquin (Google Research Brain Team)
Marcin Andrychowicz (Google DeepMind)
More from the Same Authors
- 2021: Brax - A Differentiable Physics Engine for Large Scale Rigid Body Simulation
  Daniel Freeman · Erik Frey · Anton Raichuk · Sertan Girgin · Igor Mordatch · Olivier Bachem
- 2021: Continuous Control With Ensemble Deep Deterministic Policy Gradients
  Piotr Januszewski · Mateusz Olko · Michał Królikowski · Jakub Swiatkowski · Marcin Andrychowicz · Łukasz Kuciński · Piotr Miłoś
- 2021: Continuous Control with Action Quantization from Demonstrations
  Robert Dadashi · Leonard Hussenot · Damien Vincent · Anton Raichuk · Matthieu Geist · Olivier Pietquin
- 2021: Implicitly Regularized RL with Implicit Q-values
  Nino Vieillard · Marcin Andrychowicz · Anton Raichuk · Olivier Pietquin · Matthieu Geist
- 2022 Poster: Learning Energy Networks with Generalized Fenchel-Young Losses
  Mathieu Blondel · Felipe Llinares-Lopez · Robert Dadashi · Leonard Hussenot · Matthieu Geist
- 2021 Poster: Twice regularized MDPs and the equivalence between robustness and regularization
  Esther Derman · Matthieu Geist · Shie Mannor
- 2021 Poster: There Is No Turning Back: A Self-Supervised Approach for Reversibility-Aware Reinforcement Learning
  Nathan Grinsztajn · Johan Ferret · Olivier Pietquin · Philippe Preux · Matthieu Geist
- 2019: Poster and Coffee Break 2
  Karol Hausman · Kefan Dong · Ken Goldberg · Lihong Li · Lin Yang · Lingxiao Wang · Lior Shani · Liwei Wang · Loren Amdahl-Culleton · Lucas Cassano · Marc Dymetman · Marc Bellemare · Marcin Tomczak · Margarita Castro · Marius Kloft · Marius-Constantin Dinu · Markus Holzleitner · Martha White · Mengdi Wang · Michael Jordan · Mihailo Jovanovic · Ming Yu · Minshuo Chen · Moonkyung Ryu · Muhammad Zaheer · Naman Agarwal · Nan Jiang · Niao He · Nikolaus Yasui · Nikos Karampatziakis · Nino Vieillard · Ofir Nachum · Olivier Pietquin · Ozan Sener · Pan Xu · Parameswaran Kamalaruban · Paul Mineiro · Paul Rolland · Philip Amortila · Pierre-Luc Bacon · Prakash Panangaden · Qi Cai · Qiang Liu · Quanquan Gu · Raihan Seraj · Richard Sutton · Rick Valenzano · Robert Dadashi · Rodrigo Toro Icarte · Roshan Shariff · Roy Fox · Ruosong Wang · Saeed Ghadimi · Samuel Sokota · Sean Sinclair · Sepp Hochreiter · Sergey Levine · Sergio Valcarcel Macua · Sham Kakade · Shangtong Zhang · Sheila McIlraith · Shie Mannor · Shimon Whiteson · Shuai Li · Shuang Qiu · Wai Lok Li · Siddhartha Banerjee · Sitao Luan · Tamer Basar · Thinh Doan · Tianhe Yu · Tianyi Liu · Tom Zahavy · Toryn Klassen · Tuo Zhao · Vicenç Gómez · Vincent Liu · Volkan Cevher · Wesley Suttle · Xiao-Wen Chang · Xiaohan Wei · Xiaotong Liu · Xingguo Li · Xinyi Chen · Xingyou Song · Yao Liu · YiDing Jiang · Yihao Feng · Yilun Du · Yinlam Chow · Yinyu Ye · Yishay Mansour · Yonathan Efroni · Yongxin Chen · Yuanhao Wang · Bo Dai · Chen-Yu Wei · Harsh Shrivastava · Hongyang Zhang · Qinqing Zheng · Siddhartha Satpathi · Xueqing Liu · Andreu Vall
- 2019 Poster: Are Disentangled Representations Helpful for Abstract Visual Reasoning?
  Sjoerd van Steenkiste · Francesco Locatello · Jürgen Schmidhuber · Olivier Bachem
- 2019 Poster: On the Fairness of Disentangled Representations
  Francesco Locatello · Gabriele Abbati · Thomas Rainforth · Stefan Bauer · Bernhard Schölkopf · Olivier Bachem
- 2019 Poster: On the Transfer of Inductive Bias from Simulation to the Real World: a New Disentanglement Dataset
  Muhammad Waleed Gondal · Manuel Wuethrich · Djordje Miladinovic · Francesco Locatello · Martin Breidt · Valentin Volchkov · Joel Akpo · Olivier Bachem · Bernhard Schölkopf · Stefan Bauer
- 2019 Poster: Adaptive Temporal-Difference Learning for Policy Evaluation with Per-State Uncertainty Estimates
  Carlos Riquelme · Hugo Penedones · Damien Vincent · Hartmut Maennel · Sylvain Gelly · Timothy A Mann · Andre Barreto · Gergely Neu
- 2019 Poster: A Geometric Perspective on Optimal Representations for Reinforcement Learning
  Marc Bellemare · Will Dabney · Robert Dadashi · Adrien Ali Taiga · Pablo Samuel Castro · Nicolas Le Roux · Dale Schuurmans · Tor Lattimore · Clare Lyle
- 2018 Poster: Assessing Generative Models via Precision and Recall
  Mehdi S. M. Sajjadi · Olivier Bachem · Mario Lucic · Olivier Bousquet · Sylvain Gelly
- 2017 Poster: Is the Bellman residual a bad proxy?
  Matthieu Geist · Bilal Piot · Olivier Pietquin
- 2017 Poster: Reconstruct & Crush Network
  Erinc Merdivan · Mohammad Reza Loghmani · Matthieu Geist
- 2014 Poster: Difference of Convex Functions Programming for Reinforcement Learning
  Bilal Piot · Matthieu Geist · Olivier Pietquin
- 2014 Spotlight: Difference of Convex Functions Programming for Reinforcement Learning
  Bilal Piot · Matthieu Geist · Olivier Pietquin
- 2012 Poster: Inverse Reinforcement Learning through Structured Classification
  Edouard Klein · Matthieu Geist · Bilal Piot · Olivier Pietquin