

Contributed Talk in Workshop: Generalization in Planning (GenPlan '23)

RL$^3$: Boosting Meta Reinforcement Learning via RL inside RL$^2$

Abhinav Bhatia · Samer Nashed · Shlomo Zilberstein

Keywords: [ Meta Reinforcement Learning ]

Sat 16 Dec 2:35 p.m. PST — 2:45 p.m. PST

Abstract: Meta reinforcement learning (meta-RL) methods such as RL$^2$ have emerged as promising approaches for learning data-efficient RL algorithms tailored to a given task distribution. However, these RL algorithms struggle with long-horizon tasks and out-of-distribution tasks since they rely on recurrent neural networks to process the sequence of experiences instead of summarizing them into general RL components such as value functions. Moreover, even transformers have a practical limit to the length of histories they can efficiently reason about before training and inference costs become prohibitive. In contrast, traditional RL algorithms are data-inefficient since they do not leverage domain knowledge, but they do converge to an optimal policy as more data becomes available. In this paper, we propose RL$^3$, a principled hybrid approach that combines traditional RL and meta-RL by incorporating task-specific action-values learned through traditional RL as an input to the meta-RL neural network. We show that RL$^3$ earns greater cumulative reward on long-horizon and out-of-distribution tasks compared to RL$^2$, while maintaining the efficiency of the latter in the short term. Experiments are conducted on both custom and benchmark discrete domains from the meta-RL literature that exhibit a range of short-term, long-term, and complex dependencies.
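To make the core idea concrete, below is a minimal sketch (not the authors' code) of how task-specific action-values might be fed into an RL$^2$-style recurrent meta-policy: a tabular Q-learner runs inside each meta-episode, and its current Q-value row for the observed state is concatenated with the usual RL$^2$ input (observation, previous action, reward, done flag). All names, layer sizes, and the one-hot discrete-state interface are illustrative assumptions.

```python
# Sketch of the RL^3 input augmentation, assuming small discrete state/action spaces.
import numpy as np
import torch
import torch.nn as nn

class TabularQ:
    """Per-task Q-learning that runs inside each meta-episode (assumed hyperparameters)."""
    def __init__(self, n_states, n_actions, lr=0.5, gamma=0.99):
        self.q = np.zeros((n_states, n_actions), dtype=np.float32)
        self.lr, self.gamma = lr, gamma

    def update(self, s, a, r, s_next, done):
        # Standard one-step Q-learning backup on the current task's transitions.
        target = r + (0.0 if done else self.gamma * self.q[s_next].max())
        self.q[s, a] += self.lr * (target - self.q[s, a])

class RL3Policy(nn.Module):
    """GRU meta-policy whose input is the usual RL^2 tuple plus the Q-value row Q(s, .)."""
    def __init__(self, n_states, n_actions, hidden=128):
        super().__init__()
        # one-hot state + one-hot previous action + reward + done flag + Q-values
        in_dim = n_states + n_actions + 2 + n_actions
        self.gru = nn.GRU(in_dim, hidden, batch_first=True)
        self.pi = nn.Linear(hidden, n_actions)

    def forward(self, x, h=None):
        out, h = self.gru(x, h)       # recurrent memory across the whole meta-episode
        return self.pi(out), h        # action logits and updated hidden state

def build_input(s, prev_a, r, done, q_row, n_states, n_actions):
    """Assemble one timestep's input vector (shape: [1, 1, in_dim])."""
    x = np.zeros(n_states + n_actions + 2 + n_actions, dtype=np.float32)
    x[s] = 1.0
    if prev_a is not None:
        x[n_states + prev_a] = 1.0
    x[n_states + n_actions] = r
    x[n_states + n_actions + 1] = float(done)
    x[-n_actions:] = q_row            # task-specific action-value estimates
    return torch.from_numpy(x).view(1, 1, -1)
```

In this reading, at every environment step the meta-policy conditions on both its recurrent memory and the Q-learner's current estimates, so on long horizons the Q-table keeps accumulating task-specific evidence even when the raw history becomes too long for the recurrent network to summarize effectively.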
