Poster

Transferring Expectations in Model-based Reinforcement Learning

Trung T Nguyen · Tomi Silander · Tze Yun Leong

Harrah’s Special Events Center 2nd Floor

Abstract:

We study how to automatically select and adapt multiple abstractions, or representations, of the world to support model-based reinforcement learning. We address the challenges of transfer learning in heterogeneous environments with varying tasks. We present an efficient online framework that, over a sequence of tasks, learns a set of relevant representations to be used in future tasks. We also introduce a general approach that supports transfer learning across different state spaces without pre-defined mapping strategies. We demonstrate the potential impact of our system through improved jumpstart and faster convergence to a near-optimal policy in two benchmark domains.
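The abstract does not spell out the selection mechanism, but the core idea of maintaining a library of learned world models and picking one that explains a new task can be sketched minimally. The names (`TabularModel`, `select_model`), the count-based models, and the likelihood-based selection rule below are illustrative assumptions, not the authors' actual algorithm:

```python
import math
from collections import defaultdict

class TabularModel:
    """Count-based transition model over a discrete state abstraction.
    Hypothetical stand-in for one learned representation of the world."""
    def __init__(self, abstraction, n_states=10):
        self.abstraction = abstraction  # maps a raw state to an abstract state
        self.n_states = n_states
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, s, a, s_next):
        # Record one observed transition under this abstraction.
        self.counts[(self.abstraction(s), a)][self.abstraction(s_next)] += 1

    def log_likelihood(self, transitions, alpha=1.0):
        # Laplace-smoothed log-likelihood of new-task transitions.
        ll = 0.0
        for s, a, s_next in transitions:
            nexts = self.counts[(self.abstraction(s), a)]
            total = sum(nexts.values())
            ll += math.log((nexts[self.abstraction(s_next)] + alpha) /
                           (total + alpha * self.n_states))
        return ll

def select_model(library, transitions):
    """Pick the library model that best explains data from the new task."""
    return max(library, key=lambda m: m.log_likelihood(transitions))

# Usage: two models trained on different dynamics; a few transitions
# from a new task decide which one to reuse for a jumpstart.
model_a = TabularModel(lambda s: s)
model_b = TabularModel(lambda s: s)
for _ in range(20):
    for s in range(10):
        model_a.update(s, 0, (s + 1) % 10)  # "+1" dynamics
        model_b.update(s, 0, (s + 2) % 10)  # "+2" dynamics
new_task_data = [(s, 0, (s + 1) % 10) for s in range(10)]
best = select_model([model_a, model_b], new_task_data)
```

Here `best` is `model_a`, since its counts assign far higher probability to the "+1" transitions observed in the new task.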