What Matters for Adversarial Imitation Learning?
Manu Orsini · Anton Raichuk · Leonard Hussenot · Damien Vincent · Robert Dadashi · Sertan Girgin · Matthieu Geist · Olivier Bachem · Olivier Pietquin · Marcin Andrychowicz

Wed Dec 08 12:30 AM -- 02:00 AM (PST)

Adversarial imitation learning has become a popular framework for imitation in continuous control. Over the years, several variations of its components have been proposed to improve the performance of the learned policies as well as the sample complexity of the algorithm. In practice, these choices are rarely tested all together in rigorous empirical studies. It is therefore difficult to discuss and understand which choices, among the high-level algorithmic options as well as the low-level implementation details, actually matter. To tackle this issue, we implement more than 50 of these choices in a generic adversarial imitation learning framework and investigate their impacts in a large-scale study (>500k trained agents) with both synthetic and human-generated demonstrations. We analyze the key results and highlight the most surprising findings.
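To make the framework concrete, here is a minimal sketch of the core adversarial imitation idea (in the style of GAIL): a discriminator is trained to tell expert state-action pairs from policy ones, and its output is turned into a reward for the policy. All names, shapes, and the linear discriminator are illustrative assumptions, not the paper's actual implementation; the study itself varies many such design choices (reward shape, regularization, etc.).

```python
import numpy as np

# Illustrative GAIL-style sketch; a linear discriminator stands in
# for the neural networks used in practice (an assumption).

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discriminator_logit(w, sa):
    """Linear discriminator over concatenated state-action features."""
    return sa @ w

def discriminator_loss_grad(w, expert_sa, policy_sa):
    """Binary cross-entropy: expert pairs labeled 1, policy pairs 0."""
    p_e = sigmoid(discriminator_logit(w, expert_sa))
    p_p = sigmoid(discriminator_logit(w, policy_sa))
    loss = (-np.mean(np.log(p_e + 1e-8))
            - np.mean(np.log(1.0 - p_p + 1e-8)))
    # Gradient of the loss w.r.t. w, derived from the logistic model.
    grad = (-(expert_sa * (1.0 - p_e)[:, None]).mean(axis=0)
            + (policy_sa * p_p[:, None]).mean(axis=0))
    return loss, grad

def imitation_reward(w, sa):
    """One common reward choice among several: -log(1 - D(s, a))."""
    return -np.log(1.0 - sigmoid(discriminator_logit(w, sa)) + 1e-8)

# Synthetic stand-ins for expert and policy state-action pairs.
rng = np.random.default_rng(0)
expert_sa = rng.normal(1.0, 1.0, size=(128, 4))
policy_sa = rng.normal(-1.0, 1.0, size=(128, 4))

w = np.zeros(4)
for _ in range(200):  # a few gradient steps on the discriminator
    _, grad = discriminator_loss_grad(w, expert_sa, policy_sa)
    w -= 0.5 * grad

# After training, expert-like pairs should earn a higher reward,
# which is what drives the policy toward the expert distribution.
r_expert = imitation_reward(w, expert_sa).mean()
r_policy = imitation_reward(w, policy_sa).mean()
```

In a full algorithm, the policy would then be updated with any RL method to maximize this learned reward, and the discriminator retrained on fresh policy data; the paper's study sweeps many variants of exactly these components.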

Author Information

Manu Orsini (Google)
Anton Raichuk (Google)
Leonard Hussenot (Google Research, Brain Team)
Damien Vincent (Google Brain)
Robert Dadashi (Google Brain)
Sertan Girgin
Matthieu Geist (Université de Lorraine)
Olivier Bachem (Google Brain)
Olivier Pietquin (Google Brain)
Marcin Andrychowicz (Google DeepMind)