Continuous Control With Ensemble Deep Deterministic Policy Gradients
Piotr Januszewski · Mateusz Olko · Michał Królikowski · Jakub Swiatkowski · Marcin Andrychowicz · Łukasz Kuciński · Piotr Miłoś
Event URL: https://openreview.net/forum?id=TIUfoXsnxB
The growth of deep reinforcement learning (RL) has brought multiple exciting tools and methods to the field. This rapid expansion makes it important to understand the interplay between individual elements of the RL toolbox. We approach this task from an empirical perspective by conducting a study in the continuous control setting. We present several insights of a fundamental nature, including: the commonly used additive action noise is not required for effective exploration and can even hinder training; the performance of policies trained with existing methods varies significantly across training runs, epochs of training, and evaluation runs; the critics' initialization plays the major role in ensemble-based actor-critic exploration, while the training is mostly invariant to the actors' initialization; a strategy based on posterior sampling explores better than approximated UCB combined with the weighted Bellman backup; and the weighted Bellman backup alone cannot replace clipped double Q-learning. In conclusion, we show how existing tools can be combined in a novel way, giving rise to the Ensemble Deep Deterministic Policy Gradients (ED2) method, which yields state-of-the-art results on continuous control tasks from OpenAI Gym MuJoCo. From the practical side, ED2 is conceptually straightforward, easy to code, and does not require knowledge outside of the existing RL toolbox.
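The abstract refers to two off-the-shelf components: clipped double Q-learning targets and posterior-sampling-style exploration with an ensemble of deterministic actors acting without additive action noise. The sketch below is only an illustrative NumPy outline of those two ideas, not the authors' ED2 implementation; the names `clipped_double_q_target` and `EnsembleAgent`, and the toy linear actors, are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def clipped_double_q_target(reward, done, q1_next, q2_next, gamma=0.99):
    """Bellman target that takes the minimum of two critics' next-state values
    to counteract Q-value overestimation (clipped double Q-learning)."""
    return reward + gamma * (1.0 - done) * np.minimum(q1_next, q2_next)

class EnsembleAgent:
    """Posterior-sampling-style exploration: draw one deterministic actor per
    episode and follow it for the whole episode, with no additive action noise."""
    def __init__(self, actors):
        self.actors = actors      # list of callables: observation -> action
        self.active = None

    def begin_episode(self):
        # sample a single ensemble member to act with for this episode
        self.active = rng.integers(len(self.actors))

    def act(self, obs):
        return self.actors[self.active](obs)

# Toy usage: five random linear "actors" mapping a 3-dim observation to a 2-dim action.
actors = [lambda obs, W=rng.normal(size=(2, 3)): np.tanh(W @ obs) for _ in range(5)]
agent = EnsembleAgent(actors)
agent.begin_episode()
print(agent.act(np.ones(3)))                                        # action from the sampled actor
print(clipped_double_q_target(1.0, 0.0, q1_next=4.2, q2_next=3.8))  # 1.0 + 0.99 * min(4.2, 3.8)
```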
Author Information
Piotr Januszewski (University of Warsaw)
Mateusz Olko (University of Warsaw)
Michał Królikowski (University of Warsaw)
Jakub Swiatkowski (University of Warsaw)
Marcin Andrychowicz (Google DeepMind)
Łukasz Kuciński (Polish Academy of Sciences)
Piotr Miłoś (Polish Academy of Sciences, University of Oxford)
More from the Same Authors
- 2020 : Paper 44: CARLA Real Traffic Scenarios – novel training ground and benchmark for autonomous driving
  Błażej Osiński · Piotr Miłoś · Adam Jakubowski · Christopher Galias
- 2020 : Session A, Poster 6: Structure And Randomness In Planning And Reinforcement Learning
  Piotr Januszewski
- 2021 : Off-Policy Correction For Multi-Agent Reinforcement Learning
  Michał Zawalski · Błażej Osiński · Henryk Michalewski · Piotr Miłoś
- 2021 : Implicitly Regularized RL with Implicit Q-values
  Nino Vieillard · Marcin Andrychowicz · Anton Raichuk · Olivier Pietquin · Matthieu Geist
- 2021 Poster: Subgoal Search For Complex Reasoning Tasks
  Konrad Czechowski · Tomasz Odrzygóźdź · Marek Zbysiński · Michał Zawalski · Krzysztof Olejnik · Yuhuai Wu · Łukasz Kuciński · Piotr Miłoś
- 2021 Poster: Catalytic Role Of Noise And Necessity Of Inductive Biases In The Emergence Of Compositional Communication
  Łukasz Kuciński · Tomasz Korbak · Paweł Kołodziej · Piotr Miłoś
- 2021 Poster: Continual World: A Robotic Benchmark For Continual Reinforcement Learning
  Maciej Wołczyk · Michał Zając · Razvan Pascanu · Łukasz Kuciński · Piotr Miłoś
- 2021 Poster: What Matters for Adversarial Imitation Learning?
  Manu Orsini · Anton Raichuk · Leonard Hussenot · Damien Vincent · Robert Dadashi · Sertan Girgin · Matthieu Geist · Olivier Bachem · Olivier Pietquin · Marcin Andrychowicz
- 2020 : Poster Session A: 3:00 AM - 4:30 AM PST
  Taras Khakhulin · Ravichandra Addanki · Jinhwi Lee · Jungtaek Kim · Piotr Januszewski · Konrad Czechowski · Francesco Landolfi · Lovro Vrček · Oren Neumann · Claudius Gros · Betty Fabre · Lukas Faber · Lucas Anquetil · Alberto Franzin · Tommaso Bendinelli · Sergey Bartunov
- 2019 : Coffee + Posters
  Changhao Chen · Nils Gählert · Edouard Leurent · Johannes Lehner · Apratim Bhattacharyya · Harkirat Singh Behl · TeckYian Lim · Shiho Kim · Jelena Novosel · Błażej Osiński · Arindam Das · Ruobing Shen · Jeffrey Hawke · Joachim Sicking · Babak Shahian Jahromi · Theja Tulabandhula · Claudio Michaelis · Evgenia Rusak · WENHANG BAO · Hazem Rashed · JP Chen · Amin Ansari · Jaekwang Cha · Mohamed Zahran · Daniele Reda · Jinhyuk Kim · Kim Dohyun · Ho Suk · Junekyo Jhung · Alexander Kister · Matthias Fahrland · Adam Jakubowski · Piotr Miłoś · Jean Mercat · Bruno Arsenali · Silviu Homoceanu · Xiao-Yang Liu · Philip Torr · Ahmad El Sallab · Ibrahim Sobh · Anurag Arnab · Christopher Galias