
Faster Deep Reinforcement Learning with Slower Online Network
Kavosh Asadi · Rasool Fakoor · Omer Gottesman · Taesup Kim · Michael Littman · Alexander Smola

Wed Nov 30 02:00 PM -- 04:00 PM (PST) @ Hall J #517

Deep reinforcement learning algorithms often use two networks for value function optimization: an online network, and a target network that tracks the online network with some delay. Using two separate networks enables the agent to hedge against issues that arise when performing bootstrapping. In this paper we endow two popular deep reinforcement learning algorithms, namely DQN and Rainbow, with updates that incentivize the online network to remain in the proximity of the target network. This improves the robustness of deep reinforcement learning in the presence of noisy updates. The resultant agents, called DQN Pro and Rainbow Pro, exhibit significant performance improvements over their original counterparts on the Atari benchmark, demonstrating the effectiveness of this simple idea in deep reinforcement learning. The code for our paper is available here: Github.com/amazon-research/fast-rl-with-slow-updates.
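The idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact formulation (see the linked repository for that): it shows a proximal variant of a one-step TD update for a linear Q-function, where the usual TD loss is augmented with a penalty pulling the online parameters toward the target parameters. The coefficient `c_prox` and all function names below are assumptions introduced for illustration.

```python
import numpy as np

def td_target(w_target, phi_next, reward, gamma, done):
    """Bootstrapped one-step target computed with the *target* parameters.

    phi_next: (n_actions, d) feature matrix for the next state.
    """
    q_next = 0.0 if done else np.max(phi_next @ w_target)
    return reward + gamma * q_next

def proximal_update(w_online, w_target, phi, action, target, lr, c_prox):
    """One gradient step on
        0.5 * (target - Q(s, a; w))^2 + 0.5 * c_prox * ||w - w_target||^2,
    i.e. the standard TD loss plus a proximal term that incentivizes the
    online parameters to stay near the target parameters.
    """
    q = phi[action] @ w_online
    grad_td = -(target - q) * phi[action]        # gradient of the TD term
    grad_prox = c_prox * (w_online - w_target)   # pulls w toward the target net
    return w_online - lr * (grad_td + grad_prox)
```

With `c_prox = 0` this reduces to the ordinary TD update; larger values keep the online parameters closer to the target network after each step, which is the stabilizing effect the paper exploits.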

Author Information

Kavosh Asadi (Amazon)
Rasool Fakoor (Amazon Web Services)
Omer Gottesman
Taesup Kim (Seoul National University)
Michael Littman (Brown University)
Alexander Smola (Amazon)