

Unsupervised Behavior Extraction via Random Intent Priors

Hao Hu · Yiqin Yang · Jianing Ye · Ziqing Mai · Chongjie Zhang

Great Hall & Hall B1+B2 (level 1) #2003
Tue 12 Dec 3:15 p.m. PST — 5:15 p.m. PST


Reward-free data is abundant and contains rich prior knowledge of human behaviors, but it is not well exploited by offline reinforcement learning (RL) algorithms. In this paper, we propose UBER, an unsupervised approach to extract useful behaviors from offline reward-free datasets via diversified rewards. UBER assigns different pseudo-rewards sampled from a given prior distribution to different agents to extract a diverse set of behaviors, and reuses them as candidate policies to facilitate the learning of new tasks. Perhaps surprisingly, we show that rewards generated from random neural networks are sufficient to extract diverse and useful behaviors, some even close to expert ones. We provide both empirical and theoretical evidence to justify the use of random priors for the reward function. Experiments on multiple benchmarks showcase UBER's ability to learn effective and diverse behavior sets that enhance sample efficiency for online RL, outperforming existing baselines. By reducing reliance on human supervision, UBER broadens the applicability of RL to real-world scenarios with abundant reward-free data.
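The core relabeling idea described above can be sketched as follows: each agent receives pseudo-rewards from its own randomly initialized, fixed network, so a single reward-free dataset yields several distinct labeled datasets. The network width, activation, and number of agents below are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def make_random_reward(state_dim, hidden_dim=32, seed=0):
    """Return a fixed pseudo-reward function given by a randomly
    initialized two-layer network (an assumed, minimal architecture)."""
    rng = np.random.default_rng(seed)
    w1 = rng.normal(0.0, 1.0 / np.sqrt(state_dim), (state_dim, hidden_dim))
    w2 = rng.normal(0.0, 1.0 / np.sqrt(hidden_dim), (hidden_dim, 1))

    def reward(state):
        h = np.tanh(np.asarray(state) @ w1)  # hidden features
        return float(h @ w2)                 # scalar pseudo-reward

    return reward

def relabel(states, num_agents, state_dim):
    """Relabel a reward-free batch of states with one random reward
    function per behavior-extraction agent; each row of the result is
    the pseudo-reward sequence seen by one agent."""
    reward_fns = [make_random_reward(state_dim, seed=k) for k in range(num_agents)]
    return [[r(s) for s in states] for r in reward_fns]

states = np.ones((5, 3))              # toy reward-free dataset
labels = relabel(states, num_agents=4, state_dim=3)
```

Each relabeled copy of the dataset would then be handed to a standard offline RL algorithm to produce one candidate behavior policy.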
