Continuous Control with Action Quantization from Demonstrations
Robert Dadashi · Leonard Hussenot · Damien Vincent · Anton Raichuk · Matthieu Geist · Olivier Pietquin
Event URL: https://openreview.net/forum?id=zabVCGfJvbr

In Reinforcement Learning (RL), discrete actions, as opposed to continuous actions, result in less complex exploration problems and allow the immediate computation of the maximum of the action-value function, which is central to dynamic programming-based methods. In this paper, we propose a novel method, Action Quantization from Demonstrations (AQuaDem), to learn a discretization of continuous action spaces by leveraging the prior of demonstrations. This dramatically reduces the exploration problem, since the actions faced by the agent are not only finite in number but also plausible in light of the demonstrator's behavior. By discretizing the action space, we can apply any discrete-action deep RL algorithm to the continuous control problem. We evaluate the proposed method on three different setups: RL with demonstrations, RL with play data (demonstrations of a human playing in an environment but not solving any specific task), and Imitation Learning. For all three setups, we only consider human data, which is more challenging than synthetic data. We found that AQuaDem consistently outperforms state-of-the-art continuous control methods, both in terms of performance and sample efficiency.
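The core idea of learning a discretization from demonstrated actions can be sketched as follows. This is a simplified, hypothetical illustration (not the authors' implementation): it assumes a fixed set of K candidate actions scored against a demonstrated action with a temperature-controlled soft-minimum over squared distances, and a discretization step that maps a continuous action to its nearest candidate.

```python
import math

def soft_min_loss(candidates, action, temperature):
    """Soft minimum over squared distances between K candidate actions
    and a demonstrated action (hypothetical simplification: 1-D actions,
    fixed candidates rather than a learned state-conditioned network).

    As temperature -> 0, this approaches the distance to the nearest
    candidate; larger temperatures spread gradient across candidates.
    """
    dists = [(c - action) ** 2 for c in candidates]
    return -temperature * math.log(
        sum(math.exp(-d / temperature) for d in dists)
    )

def discretize(candidates, action):
    """Map a continuous action to the index of its nearest candidate,
    turning the continuous control problem into a discrete one."""
    dists = [(c - action) ** 2 for c in candidates]
    return dists.index(min(dists))
```

With a small temperature, the loss is dominated by the closest candidate, so minimizing it over a dataset of demonstrated actions pulls the candidates toward the modes of the demonstrator's action distribution; any discrete-action RL algorithm can then act by picking a candidate index.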

Author Information

Robert Dadashi (Google Brain)
Leonard Hussenot (Google Research, Brain Team)
Damien Vincent (Google Brain)
Anton Raichuk (Google)
Matthieu Geist (Université de Lorraine)
Olivier Pietquin (Google Brain)
