Poster

EX2: Exploration with Exemplar Models for Deep Reinforcement Learning

Justin Fu · John Co-Reyes · Sergey Levine

Pacific Ballroom #3

Keywords: [ Reinforcement Learning ] [ Deep Learning ]


Abstract:

Deep reinforcement learning algorithms have been shown to learn complex tasks using highly general policy classes. However, sparse reward problems remain a significant challenge. Exploration methods based on novelty detection have been particularly successful in such settings but typically require generative or predictive models of the observations, which can be difficult to train when the observations are very high-dimensional and complex, as in the case of raw images. We propose a novelty detection algorithm for exploration that is based entirely on discriminatively trained exemplar models, where classifiers are trained to discriminate each visited state against all others. Intuitively, novel states are easier to distinguish against other states seen during training. We show that this kind of discriminative modeling corresponds to implicit density estimation, and that it can be combined with count-based exploration to produce competitive results on a range of popular benchmark tasks, including state-of-the-art results on challenging egocentric observations in the vizDoom benchmark.
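
To make the core idea concrete, below is a minimal sketch (not the authors' released code) of how an exemplar model yields an exploration bonus: for an exemplar state x*, a discriminator D is trained to separate x* from background states drawn from past experience. At the optimum, D(x) = p*(x) / (p*(x) + p(x)), so the visitation density can be recovered implicitly as p(x) ∝ (1 − D(x)) / D(x) and converted into a novelty reward. The logistic-regression discriminator and names such as `exemplar_bonus` are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def train_exemplar_classifier(exemplar, background, epochs=200, lr=0.1):
    """Logistic regression distinguishing one exemplar state from background states."""
    d = exemplar.shape[0]
    w, b = np.zeros(d), 0.0
    # Positive example: the exemplar itself; negatives: a batch of visited states.
    X = np.vstack([exemplar[None, :], background])
    y = np.concatenate([[1.0], np.zeros(len(background))])
    for _ in range(epochs):
        logits = X @ w + b
        probs = 1.0 / (1.0 + np.exp(-logits))
        grad = probs - y
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def exemplar_bonus(exemplar, background, beta=1.0, eps=1e-6):
    """Exploration bonus from the implicit density implied by the discriminator."""
    w, b = train_exemplar_classifier(exemplar, background)
    D = 1.0 / (1.0 + np.exp(-(exemplar @ w + b)))  # D(x*) near 1 => easily distinguished, i.e. novel
    implied_density = (1.0 - D) / max(D, eps)      # p(x*) up to a constant
    return beta * -np.log(implied_density + eps)   # higher bonus for states with low implied density

# Usage sketch: add exemplar_bonus(s, replay_states) to the environment reward at visited states.
rng = np.random.default_rng(0)
replay_states = rng.normal(size=(256, 8))          # stand-in for previously visited observations
novel_state = rng.normal(size=8) + 5.0             # far from the replay distribution
visited_state = replay_states[0]                   # already seen many similar states
print("novel bonus:  ", exemplar_bonus(novel_state, replay_states))
print("visited bonus:", exemplar_bonus(visited_state, replay_states))
```

In the full method these per-exemplar classifiers are amortized with a single network rather than trained from scratch per state, and the resulting novelty score is combined with count-based exploration bonuses; the sketch only illustrates the discriminator-to-density conversion.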
