

Poster

Deep Policy Gradient Without Batch Updates or a Replay Buffer

Gautham Vasan · Mohamed Elsayed · Seyed Alireza Azimi · Jiamin He · Fahim Shahriar · Colin Bellinger · Martha White · Rupam Mahmood

West Ballroom A-D #6310
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Modern deep policy gradient methods achieve effective performance on simulated robotic tasks, but they all require large replay buffers or expensive batch updates, or both, making them ill-suited to real robots with resource-limited computers. We show that these methods fail catastrophically when limited to small replay buffers or to \emph{incremental learning}, where each update uses only the most recent sample, without batch updates or a replay buffer. We propose a novel incremental deep policy gradient method, Action Value Gradient (AVG), together with a set of normalization and scaling techniques that address the instability of incremental learning. On standard robotic benchmark tasks, we show that AVG is the only incremental method that learns effectively, often achieving final performance comparable to batch policy gradient methods. This advancement enabled us to demonstrate, for the first time, effective deep reinforcement learning on real robots (a robotic manipulator and a mobile robot) using only incremental updates.
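To make the incremental setting concrete, below is a minimal sketch of an actor-critic update that touches only the single most recent transition, with constant memory and no replay buffer. This is not the authors' AVG implementation; the network sizes, learning rates, and the running-mean observation normalization are illustrative assumptions standing in for the normalization and scaling techniques described in the paper.

```python
# Sketch: one-sample, incremental actor-critic update (no buffer, no batches).
# All hyperparameters and architecture choices here are hypothetical.
import torch
import torch.nn as nn

class RunningNorm:
    """Incremental running mean/variance for observation normalization."""
    def __init__(self, dim, eps=1e-8):
        self.mean = torch.zeros(dim)
        self.var = torch.ones(dim)
        self.count = eps

    def update(self, x):
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self.var += (delta * (x - self.mean) - self.var) / self.count

    def __call__(self, x):
        return (x - self.mean) / torch.sqrt(self.var + 1e-8)

obs_dim, act_dim, gamma = 8, 2, 0.99
actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(),
                      nn.Linear(64, act_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.Tanh(),
                       nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=3e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=3e-4)
norm = RunningNorm(obs_dim)

def incremental_step(obs, action, reward, next_obs, done):
    """One update from a single transition; nothing is stored afterwards."""
    norm.update(obs)
    s, s2 = norm(obs), norm(next_obs)

    # Critic: one-sample TD(0) target using the actor's action at the next state.
    with torch.no_grad():
        next_q = critic(torch.cat([s2, actor(s2)]))
        target = reward + gamma * (1.0 - done) * next_q
    q = critic(torch.cat([s, action]))
    critic_loss = (q - target).pow(2)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: ascend the critic's value of the actor's own action
    # (a gradient through the action-value, as the method's name suggests).
    actor_loss = -critic(torch.cat([s, actor(s)]))
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```

The point of the sketch is only the resource profile: every update consumes one transition and constant memory, which is what makes this style of learning feasible on resource-limited onboard computers. The paper's actual normalization and scaling techniques go beyond the simple running-mean wrapper shown here.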
