

Poster

Diversity-Driven Exploration Strategy for Deep Reinforcement Learning

Zhang-Wei Hong · Tzu-Yun Shann · Shih-Yang Su · Yi-Hsiang Chang · Tsu-Jui Fu · Chun-Yi Lee

Room 517 AB #135

Keywords: [ Exploration ] [ Reinforcement Learning ]


Abstract:

Efficient exploration remains a challenging research problem in reinforcement learning, especially when an environment contains large state spaces, deceptive local optima, or sparse rewards. To tackle this problem, we present a diversity-driven approach to exploration that can be easily combined with both off- and on-policy reinforcement learning algorithms. We show that simply adding a distance measure to the loss function significantly enhances an agent's exploratory behaviors, thus preventing the policy from becoming trapped in local optima. We further propose an adaptive scaling method to stabilize the learning process. We demonstrate the effectiveness of our method in large 2D gridworlds and a variety of benchmark environments, including Atari 2600 and MuJoCo. Experimental results show that our method outperforms baseline approaches in most tasks in terms of mean scores and exploration efficiency.
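To make the core idea concrete, the following minimal Python sketch shows one way a distance term between the current policy and recent prior policies could be added to the training loss. It assumes the distance is a KL divergence over discrete action distributions and that prior policies are kept in a small buffer; the function names, buffer handling, and fixed weighting factor alpha are illustrative assumptions rather than the paper's exact formulation, which also scales this term adaptively.

    import numpy as np

    def kl_divergence(p, q, eps=1e-8):
        """KL(p || q) between two discrete action distributions."""
        p = np.clip(p, eps, 1.0)
        q = np.clip(q, eps, 1.0)
        return float(np.sum(p * np.log(p / q)))

    def diversity_augmented_loss(task_loss, current_dist, prior_dists, alpha):
        """Sketch of a diversity-augmented objective: L_D = L - alpha * E[D(pi, pi')],
        where D is taken here to be a KL divergence to action distributions produced
        by recently stored prior policies (an assumption for illustration)."""
        if not prior_dists:
            return task_loss
        diversity = np.mean([kl_divergence(current_dist, q) for q in prior_dists])
        # Subtracting the distance term lowers the loss for policies that
        # behave differently from recent ones, encouraging exploration.
        return task_loss - alpha * diversity

    # Illustrative usage with a 4-action distribution and two prior policies.
    current = np.array([0.4, 0.3, 0.2, 0.1])
    buffer_of_priors = [np.array([0.25, 0.25, 0.25, 0.25]),
                        np.array([0.70, 0.10, 0.10, 0.10])]
    loss = diversity_augmented_loss(task_loss=1.5,
                                    current_dist=current,
                                    prior_dists=buffer_of_priors,
                                    alpha=0.1)
    print(loss)

In practice, alpha would be adjusted over training (the adaptive scaling mentioned in the abstract); a fixed value is used above only to keep the sketch self-contained.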
