Poster
Scalable Coordinated Exploration in Concurrent Reinforcement Learning
Maria Dimakopoulou · Ian Osband · Benjamin Van Roy
Room 517 AB #155
Keywords: [ Reinforcement Learning ] [ Exploration ]
We consider a team of reinforcement learning agents that concurrently operate in a common environment, and we develop an approach to efficient coordinated exploration that is suitable for problems of practical scale. Our approach builds on the seed sampling concept introduced in Dimakopoulou and Van Roy (2018) and on a randomized value function learning algorithm from Osband et al. (2016). We demonstrate that, for simple tabular contexts, the approach is competitive with those previously proposed in Dimakopoulou and Van Roy (2018), and that, on a higher-dimensional problem with a neural network value function representation, the approach learns quickly with far fewer agents than alternative exploration schemes.
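Below is a minimal, hypothetical sketch of the seed-sampling idea in a toy tabular setting, not the authors' implementation: each concurrent agent draws a seed once, and that seed fixes a random prior perturbation of its value estimates and the noise it applies to shared rewards, so agents fitted to the same pooled data still explore diversely yet consistently. All environment details, constants, and helper names here (e.g., `step`, `SeededAgent`, `PRIOR_SCALE`) are illustrative assumptions.

```python
import numpy as np

N_STATES, N_ACTIONS, N_AGENTS, HORIZON = 6, 2, 4, 20
PRIOR_SCALE, NOISE_SCALE, GAMMA = 1.0, 0.1, 0.95


def step(state, action):
    """Toy chain MDP: action 1 moves right (reward at the far end), action 0 resets."""
    next_state = min(state + 1, N_STATES - 1) if action == 1 else 0
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward


class SeededAgent:
    def __init__(self, seed):
        self.seed = seed                                  # drawn once, reused forever
        rng = np.random.default_rng(seed)
        # Random prior perturbation of the value function, fixed by the seed.
        self.prior = PRIOR_SCALE * rng.standard_normal((N_STATES, N_ACTIONS))

    def fit(self, buffer):
        """Fit a perturbed Q-table to the shared buffer; the prior acts as one pseudo-sample."""
        q = np.zeros((N_STATES, N_ACTIONS))
        for _ in range(50):
            sums = self.prior.copy()
            counts = np.ones((N_STATES, N_ACTIONS))
            for i, (s, a, r, s2) in enumerate(buffer):
                # Reward noise is a deterministic function of (agent seed, transition index),
                # so refitting on the same shared data reproduces the same perturbation.
                noise = NOISE_SCALE * np.random.default_rng(
                    (self.seed, i)).standard_normal()
                sums[s, a] += r + noise + GAMMA * q[s2].max()
                counts[s, a] += 1
            q = sums / counts
        return q

    def act(self, q, state):
        return int(q[state].argmax())                     # greedy w.r.t. own perturbed values


shared_buffer = []                                        # all agents pool their transitions
agents = [SeededAgent(seed) for seed in range(N_AGENTS)]
for episode in range(10):
    for agent in agents:
        q = agent.fit(shared_buffer)
        state = 0
        for _ in range(HORIZON):
            action = agent.act(q, state)
            next_state, reward = step(state, action)
            shared_buffer.append((state, action, reward, next_state))
            state = next_state
print("transitions gathered:", len(shared_buffer))
```

Because each agent's perturbations are a fixed function of its seed rather than resampled on every refit, the agents commit to distinct but internally consistent exploration strategies while still benefiting from all data gathered by the team; the paper's contribution is to make this scale by pairing seeds with randomized value function learning instead of tabular estimates.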