
Knowledge Transfer in Multi-Task Deep Reinforcement Learning for Continuous Control
Zhiyuan Xu · Kun Wu · Zhengping Che · Jian Tang · Jieping Ye

Thu Dec 10 09:00 AM -- 11:00 AM (PST) @ Poster Session 5 #1510

While Deep Reinforcement Learning (DRL) has emerged as a promising approach to many complex tasks, it remains challenging to train a single DRL agent that is capable of undertaking multiple different continuous control tasks. In this paper, we present a Knowledge Transfer based Multi-task Deep Reinforcement Learning framework (KTM-DRL) for continuous control, which enables a single DRL agent to achieve expert-level performance in multiple different tasks by learning from task-specific teachers. In KTM-DRL, the multi-task agent first leverages an offline knowledge transfer algorithm designed particularly for the actor-critic architecture to quickly learn a control policy from the experience of task-specific teachers, and then it employs an online learning algorithm to further improve itself by learning from new online transition samples under the guidance of those teachers. We perform a comprehensive empirical study with two commonly used benchmarks in the MuJoCo continuous control task suite. The experimental results demonstrate the effectiveness of KTM-DRL and of its knowledge transfer and online learning algorithms, as well as its superiority over the state of the art by a large margin.
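The abstract describes a two-stage procedure: an offline knowledge-transfer step in which the multi-task student learns from the recorded experience of task-specific teachers, followed by online learning under teacher guidance. The paper's exact losses and architectures are not given here, so the following is only a minimal, hypothetical sketch of what an offline actor-critic transfer step might look like: the student actor regresses onto the teacher's actions, and the student critic regresses onto the teacher's Q-value estimates, both over a buffer of teacher transitions. The linear models, dimensions, and loss choices below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumed for illustration; the real framework uses deep networks).
state_dim, action_dim, n = 4, 2, 256

# Student actor and critic as linear models, initialized at zero.
W_actor = np.zeros((state_dim, action_dim))   # maps state -> action
w_critic = np.zeros(state_dim + action_dim)   # maps (state, action) -> Q-value

# Synthetic "teacher experience": states, the teacher's actions on those
# states, and the teacher's Q-value estimates for those (state, action) pairs.
states = rng.normal(size=(n, state_dim))
teacher_actions = states @ rng.normal(size=(state_dim, action_dim))
sa = np.concatenate([states, teacher_actions], axis=1)
teacher_q = sa @ rng.normal(size=state_dim + action_dim)

lr = 0.05
for _ in range(500):
    # Actor transfer: minimize MSE between student and teacher actions.
    pred_a = states @ W_actor
    W_actor -= lr * states.T @ (pred_a - teacher_actions) / n

    # Critic transfer: minimize MSE between student Q and teacher Q
    # on the teacher's own (state, action) pairs.
    pred_q = sa @ w_critic
    w_critic -= lr * sa.T @ (pred_q - teacher_q) / n

actor_mse = float(np.mean((states @ W_actor - teacher_actions) ** 2))
critic_mse = float(np.mean((sa @ w_critic - teacher_q) ** 2))
print(actor_mse, critic_mse)  # both should be driven close to zero
```

After such an offline stage, the student would continue with online learning, collecting its own transitions and refining the actor and critic while the teachers still guide the updates; that second stage is not sketched here.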

Author Information

Zhiyuan Xu (Syracuse University)

I am currently pursuing a Ph.D. in computer science at the Department of Electrical Engineering and Computer Science, Syracuse University.

Kun Wu (Syracuse University)
Zhengping Che (DiDi AI Labs, Didi Chuxing)
Jian Tang (DiDi AI Labs, DiDi Chuxing)
Jieping Ye (Didi Chuxing)
