
EX2: Exploration with Exemplar Models for Deep Reinforcement Learning
Justin Fu · John Co-Reyes · Sergey Levine

Wed Dec 06 05:30 PM -- 05:35 PM (PST) @ Hall A

Deep reinforcement learning algorithms have been shown to learn complex tasks using highly general policy classes. However, sparse reward problems remain a significant challenge. Exploration methods based on novelty detection have been particularly successful in such settings but typically require generative or predictive models of the observations, which can be difficult to train when the observations are very high-dimensional and complex, as in the case of raw images. We propose a novelty detection algorithm for exploration that is based entirely on discriminatively trained exemplar models, where classifiers are trained to discriminate each visited state against all others. Intuitively, novel states are easier to distinguish against other states seen during training. We show that this kind of discriminative modeling corresponds to implicit density estimation, and that it can be combined with count-based exploration to produce competitive results on a range of popular benchmark tasks, including state-of-the-art results on challenging egocentric observations in the VizDoom benchmark.
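The core idea of the abstract can be illustrated with a toy sketch: fit a classifier to discriminate one "exemplar" state from previously visited states, and read an implicit density off the discriminator output (for an optimal discriminator against density p, p(x*) ≈ (1 − D(x*)) / D(x*)). The sketch below is a minimal NumPy illustration under assumed simplifications (a linear logistic classifier, Gaussian-smoothed exemplars, a −log p̂ bonus), not the authors' implementation, which amortizes exemplar models with neural networks.

```python
import numpy as np

def train_exemplar_classifier(exemplar, buffer, n_steps=500, lr=0.5, noise=0.1):
    """Fit a logistic regressor discriminating one exemplar state
    (label 1, smoothed with small Gaussian noise) from replay-buffer
    states (label 0). Returns a function x -> D(x) in (0, 1)."""
    rng = np.random.default_rng(0)
    d = exemplar.shape[0]
    w, b = np.zeros(d), 0.0
    for _ in range(n_steps):
        pos = exemplar + noise * rng.standard_normal((len(buffer), d))
        X = np.vstack([pos, buffer])
        y = np.concatenate([np.ones(len(pos)), np.zeros(len(buffer))])
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))       # sigmoid
        g = p - y                                     # logistic-loss gradient
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return lambda x: 1.0 / (1.0 + np.exp(-(x @ w + b)))

def novelty_bonus(exemplar, buffer):
    """Exemplar-style bonus: the optimal discriminator implies the
    density estimate p_hat = (1 - D) / D at the exemplar, so rarely
    visited (low-density) states receive a large -log p_hat bonus."""
    D = train_exemplar_classifier(exemplar, buffer)(exemplar)
    p_hat = (1.0 - D) / D
    return -np.log(p_hat + 1e-8)

# Toy check: a state far from the visited distribution is harder to
# confuse with the buffer, so it should look more novel.
buffer = np.random.default_rng(1).standard_normal((256, 2))  # visited states
common = np.zeros(2)            # at the mode of visited states
novel = np.array([5.0, 5.0])    # far from anything visited
assert novelty_bonus(novel, buffer) > novelty_bonus(common, buffer)
```

A state surrounded by buffer samples is nearly indistinguishable from them, so D ≈ 0.5 and the bonus is near zero, while an easily separated state drives D toward 1 and the bonus up, matching the intuition that novel states are easier to distinguish.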

Author Information

Justin Fu (UC Berkeley)
John Co-Reyes (UC Berkeley)

Interested in solving intelligence. Currently working on hierarchical reinforcement learning and learning a physical intuition of the world.

Sergey Levine (UC Berkeley)
