

Poster

Scaling Laws for Reward Model Overoptimization in Direct Alignment Algorithms

Rafael Rafailov · Yaswanth Chittepu · Ryan Park · Harshit Sushil Sikchi · Joey Hejna · Brad Knox · Chelsea Finn · Scott Niekum

West Ballroom A-D #6503
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Reinforcement Learning from Human Feedback (RLHF) has been crucial to the recent success of Large Language Models (LLMs); however, it is often a complex and brittle process. In the classical RLHF framework, a reward model is first trained to represent human preferences, which is in turn used by an online reinforcement learning (RL) algorithm to optimize the LLM. A prominent issue with such methods is reward over-optimization or reward hacking, where performance as measured by the learned proxy reward model increases, but true model quality plateaus or even deteriorates. Direct Alignment Algorithms (DAAs), such as Direct Preference Optimization (DPO), have emerged as alternatives to the classical RLHF pipeline. However, despite not training a separate proxy reward model or using RL, they still commonly deteriorate from over-optimization. While the so-called reward hacking phenomenon is not well-defined for DAAs, we still uncover similar trends: at higher KL-budgets, DAA algorithms exhibit degradation patterns similar to their classic RLHF counterparts. In particular, we find that DAA methods deteriorate not only across a wide range of KL-budgets, but also often before even a single epoch of the dataset is completed. Through extensive empirical experimentation, this work formulates the reward over-optimization or hacking problem for DAAs and explores its consequences across objectives, training regimes, and model scales.
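For context, the sketch below shows the standard DPO objective that DAAs optimize in place of an explicit reward model plus an RL loop. This is a minimal illustration in PyTorch-style Python, not the authors' code; the function and variable names are illustrative, and beta is the coefficient that implicitly controls the KL-budget discussed in the abstract.

import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Per-batch DPO loss from summed sequence log-probabilities.

    Illustrative sketch only: inputs are log-probabilities of the preferred
    (chosen) and dispreferred (rejected) completions under the trained policy
    and a frozen reference model.
    """
    # Implicit rewards: beta-scaled log-ratios of policy to reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between preferred and dispreferred completions.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()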
