

Poster

Offline Behavior Distillation

Shiye Lei · Sen Zhang · Dacheng Tao

East Exhibit Hall A-C #1809
[ Project Page ]
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract: Massive reinforcement learning (RL) datasets are typically collected to train policies offline, without environment interaction, but their sheer volume makes training inefficient. To tackle this issue, we formulate offline behavior distillation (OBD), which synthesizes a small set of expert behavioral data from sub-optimal RL data, thereby enabling rapid policy learning. We propose two naive OBD objectives, DBC and PBC, which measure distillation quality via the decision difference between policies trained on the distilled data and either the offline data or a near-expert policy. Because the underlying bi-level optimization is intractable, the OBD objective is difficult to drive to small values, and this hurts the naive objectives: their distillation performance guarantees amplify the residual objective value with quadratic discount complexity $\mathcal{O}(1/(1-\gamma)^2)$. We theoretically analyze the performance of policies trained on distilled data and prove that policy performance is equivalent to an action-value weighted decision difference. Building on this insight, we introduce action-value weighted PBC (Av-PBC) as the OBD objective. By optimizing the weighted decision difference, Av-PBC attains a superior distillation guarantee with linear discount complexity $\mathcal{O}(1/(1-\gamma))$, and therefore tolerates limited optimization far better than the naive objectives. Extensive experiments on multiple D4RL datasets show that Av-PBC delivers significant improvements in OBD performance, faster distillation convergence, and stable generalization across architectures and optimizers.
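The abstract describes Av-PBC only at a high level. As a rough illustration, here is a minimal sketch of what an action-value weighted decision-difference loss of this kind could look like, assuming a discrete action space and a pretrained critic; the names (`policy_net`, `expert_policy`, `q_net`, `av_pbc_loss`), the choice of cross-entropy as the decision difference, and the normalization are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): an action-value weighted
# decision-difference loss of the kind the abstract describes for Av-PBC.
# Assumes a discrete action space and a pretrained critic q_net.
import torch
import torch.nn.functional as F


def av_pbc_loss(policy_net, expert_policy, q_net, states):
    """Action-value weighted decision difference between a policy trained on
    distilled data (policy_net) and a near-expert policy (expert_policy),
    evaluated on a batch of states drawn from the offline dataset.
    """
    with torch.no_grad():
        expert_probs = F.softmax(expert_policy(states), dim=-1)   # (B, A)
        expert_actions = expert_probs.argmax(dim=-1)              # greedy expert action

        # Action values of the expert's actions serve as per-state weights,
        # so states where the decision matters more dominate the objective.
        q_values = q_net(states)                                  # (B, A)
        weights = q_values.gather(1, expert_actions.unsqueeze(1)).squeeze(1)
        weights = weights / (weights.abs().mean() + 1e-8)         # assumed normalization

    # Decision difference measured as cross-entropy to the expert distribution.
    log_probs = F.log_softmax(policy_net(states), dim=-1)
    decision_diff = -(expert_probs * log_probs).sum(dim=-1)       # per-state CE

    return (weights * decision_diff).mean()
```

In the full OBD setting sketched above, a loss of this form would sit inside the bi-level loop: the inner loop trains `policy_net` on the synthetic data, and the outer loop updates the synthetic data so that the resulting policy minimizes this weighted decision difference on offline states.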
