Poster

Catastrophic Goodhart: regularizing RLHF with KL divergence does not mitigate heavy-tailed reward misspecification

Thomas Kwa · Adrià Garriga-Alonso

East Exhibit Hall A-C #2208
Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

When applying reinforcement learning from human feedback (RLHF), the reward is learned from data, and therefore always has some error. It is common to mitigate this by regularizing the policy by KL divergence from a base model, with the hope that by balancing reward with regularization we can achieve desirable outcomes despite this reward misspecification. We show that when the reward function has light-tailed error, the optimal policies under less restrictive KL penalties achieve arbitrarily high utility. However, if error is heavy-tailed, some policies obtain arbitrarily high reward despite achieving no more utility than the base model—a phenomenon we call catastrophic Goodhart. We adapt a discrete optimization method developed for adversarial attacks to measure the tails of open-source reward models, finding that they are consistent with light-tailed error. However, the pervasiveness of heavy-tailed distributions in many real-world applications indicates that future sources of RL reward could have heavy-tailed error, increasing the likelihood of reward hacking even with KL regularization.
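To make the light- versus heavy-tailed contrast concrete, below is a minimal toy simulation (not the paper's experiments). It assumes the proxy reward decomposes as true utility plus reward-model error, and uses best-of-n sampling as a stand-in for a KL-constrained policy, since KL(best-of-n || base) <= log n. The Gaussian and Student-t error distributions are illustrative assumptions only.

# Toy sketch: why heavy-tailed reward error can defeat KL regularization.
# Assumptions (not from the paper's experiments): proxy reward R = U + E,
# where U is true utility and E is reward-model error; best-of-n sampling
# serves as a stand-in for a KL-constrained policy (KL(best-of-n || base) <= log n).
import numpy as np

rng = np.random.default_rng(0)

def best_of_n_utility(error_sampler, n=10_000, trials=200):
    """Mean true utility of the sample selected by maximizing the proxy reward."""
    utilities = []
    for _ in range(trials):
        u = rng.standard_normal(n)   # true utility of n base-model samples
        e = error_sampler(n)         # reward-model error
        proxy = u + e                # misspecified reward that gets optimized
        utilities.append(u[np.argmax(proxy)])
    return float(np.mean(utilities))

light = lambda n: rng.standard_normal(n)        # light-tailed (Gaussian) error
heavy = lambda n: rng.standard_t(df=2, size=n)  # heavy-tailed (Student-t, df=2) error

print("light-tailed error:", best_of_n_utility(light))  # clearly positive: utility improves
print("heavy-tailed error:", best_of_n_utility(heavy))  # near zero: catastrophic Goodhart

In this sketch, Gaussian error still lets the selected samples have clearly positive mean utility, whereas with Student-t error the maximum proxy reward is driven almost entirely by an error outlier, so the selected sample's utility stays near the base-model average: the catastrophic Goodhart pattern described in the abstract.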
