DASCO: Dual-Generator Adversarial Support Constrained Offline Reinforcement Learning
Quan Vuong · Aviral Kumar · Sergey Levine · Yevgen Chebotar

Thu Dec 01 09:00 AM -- 11:00 AM (PST) @ Hall J #333

In offline RL, constraining the learned policy to remain close to the data is essential to prevent the policy from outputting out-of-distribution (OOD) actions with erroneously overestimated values. In principle, generative adversarial networks (GANs) can provide an elegant solution to do so, with the discriminator directly providing a probability that quantifies distributional shift. However, in practice, GAN-based offline RL methods have not outperformed alternative approaches, perhaps because the generator is trained to both fool the discriminator and maximize return, two objectives that are often at odds with each other. In this paper, we show that this conflict can be resolved by training two generators: one that maximizes return, while the other captures the "remainder" of the data distribution in the offline dataset, such that the mixture of the two is close to the behavior policy. We show that having two generators not only enables an effective GAN-based offline RL method, but also approximates a support constraint, where the policy does not need to match the entire data distribution, but only the slice of the data that leads to high long-term performance. We name our method DASCO, for Dual-Generator Adversarial Support Constrained Offline RL. On benchmark tasks that require learning from sub-optimal data, DASCO significantly outperforms prior methods that enforce distribution constraints.
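The central idea, that the discriminator sees samples from the *mixture* of the two generators rather than from the policy alone, can be illustrated with a toy sketch. This is not the authors' implementation: the 1-D action space, the Gaussian samplers, the linear discriminator, and the equal mixture weights are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mixture(pi_sampler, aux_sampler, n):
    """Draw n actions from an equal-weight mixture of the two generators."""
    from_pi = rng.random(n) < 0.5
    return np.where(from_pi, pi_sampler(n), aux_sampler(n))

def discriminator_logits(actions, w, b):
    # Toy linear discriminator on 1-D actions (illustrative only).
    return w * actions + b

def gan_discriminator_loss(data_actions, mixture_actions, w, b):
    # Standard GAN binary cross-entropy: dataset actions labeled "real" (1),
    # mixture samples labeled "fake" (0).
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))
    p_real = sigmoid(discriminator_logits(data_actions, w, b))
    p_fake = sigmoid(discriminator_logits(mixture_actions, w, b))
    return -(np.log(p_real + 1e-8).mean() + np.log(1.0 - p_fake + 1e-8).mean())

# Toy behavior data: actions drawn from a broad distribution.
data = rng.normal(0.0, 1.0, size=1000)

# The policy generator concentrates on high-return actions (here, near +1);
# the auxiliary generator covers the "remainder" (near -1), so the mixture,
# not the policy alone, is pushed toward the broader data distribution.
pi = lambda n: rng.normal(1.0, 0.5, size=n)
aux = lambda n: rng.normal(-1.0, 0.5, size=n)

mix = sample_mixture(pi, aux, 1000)
loss = gan_discriminator_loss(data, mix, w=0.0, b=0.0)
print(round(loss, 3))  # an uninformative (all-zero) discriminator gives 2*log(2) ≈ 1.386
```

Because the adversarial pressure falls on the mixture, the policy generator is free to specialize in the high-return slice of the data while the auxiliary generator absorbs the rest, which is what yields the approximate support constraint described above.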

Author Information

Quan Vuong (University of California San Diego)
Aviral Kumar (UC Berkeley)
Sergey Levine (UC Berkeley)
Yevgen Chebotar (Google)
