
Return Augmentation gives Supervised RL Temporal Compositionality
Keiran Paster · Silviu Pitis · Sheila McIlraith · Jimmy Ba
Event URL: https://openreview.net/forum?id=q5olkWCt7nl

Offline Reinforcement Learning (RL) methods that use supervised learning or sequence modeling (e.g., Decision Transformer) work by training a return-conditioned policy. A fundamental limitation of these approaches, compared to value-based methods, is that they have trouble generalizing to behaviors with higher returns than those seen during training. Value-based offline-RL algorithms like CQL use bootstrapping to combine training data from multiple trajectories and thereby learn strong behaviors from sub-optimal data. We set out to endow RL via Supervised Learning (RvS) methods with this form of temporal compositionality. To do this, we introduce SuperB, a dynamic programming algorithm for data augmentation that augments the returns in the offline dataset by combining rewards from intersecting trajectories. We show theoretically that SuperB can improve sample complexity and enable RvS to find optimal policies in cases where it previously fell behind the performance of value-based methods. Empirically, we find that SuperB improves the performance of RvS in several offline RL environments, surpassing the prior state-of-the-art RvS agents on AntMaze by orders of magnitude and offering performance competitive with value-based algorithms on the D4RL-gym tasks.
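The core idea described above (propagating the best achievable return-to-go through states shared by multiple trajectories, then relabeling each trajectory with the augmented return) can be sketched with a small dynamic program. This is an illustrative sketch based only on the abstract, not the paper's actual implementation; the dataset format, state hashing, and iteration scheme are all assumptions.

```python
# Sketch of return augmentation via dynamic programming over intersecting
# trajectories. Assumes deterministic, discrete (hashable) states so that
# trajectories visiting the same state share a table entry.
from collections import defaultdict

def augment_returns(trajectories, n_iters=10):
    """trajectories: list of trajectories, each a list of
    (state, reward, next_state) transitions. Returns the trajectories
    relabeled with augmented returns-to-go."""
    # Best known return-to-go from each state; unseen states default to 0.
    best = defaultdict(float)
    for _ in range(n_iters):  # repeat so values propagate across trajectories
        for traj in trajectories:
            for state, reward, next_state in reversed(traj):
                # A state's best return can route through any trajectory
                # that passes through it, not just its own continuation.
                best[state] = max(best[state], reward + best[next_state])
    # Relabel each transition with the augmented return-to-go from its state.
    return [[(s, r, best[s]) for (s, r, _ns) in traj]
            for traj in trajectories]
```

For example, a trajectory that reaches state "B" with low downstream reward can inherit the higher return of a second trajectory that continues from "B" more successfully, which is exactly the kind of "stitching" that value-based bootstrapping provides.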

Author Information

Keiran Paster (University of Toronto)
Silviu Pitis (University of Toronto)
Sheila McIlraith (University of Toronto and Vector Institute)
Jimmy Ba (University of Toronto and Vector Institute)
