

Poster

Calibrated Preference Optimization for Direct Language Model Alignment

Teng Xiao · Yige Yuan · Huaisheng Zhu · Mingxiao Li · Vasant Honavar

West Ballroom A-D #6905
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

We consider the problem of aligning large language models (LLMs) with human preference data. Contrastive preference optimization has shown promising results in aligning LLMs with available preference data by optimizing the implicit reward associated with the policy. However, the contrastive objective focuses mainly on the relative values of the implicit rewards of two responses while ignoring their actual values, resulting in suboptimal alignment with human preferences. To address this limitation, we propose calibrated direct preference optimization (Cal-DPO), a simple yet effective algorithm. We show that substantial improvement in alignment with the given preferences can be achieved simply by calibrating the implicit reward so that the learned implicit rewards are comparable in scale to the ground-truth rewards. We demonstrate the theoretical advantages of Cal-DPO over existing approaches. Experiments on a variety of standard benchmarks show that Cal-DPO remarkably improves off-the-shelf methods, achieving, for example, a relative improvement of 13.9% on GSM8K over DPO.
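The abstract describes constraining the scale of the implicit reward, not just the gap between the chosen and rejected responses as in the standard contrastive objective. The sketch below illustrates that idea in PyTorch; the function name, the squared-error form of the calibration term, and the symmetric `reward_target` values are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def calibrated_preference_loss(policy_chosen_logps, policy_rejected_logps,
                               ref_chosen_logps, ref_rejected_logps,
                               beta=0.1, reward_target=0.5):
    """Sketch of a calibrated preference loss (illustrative, not the paper's).

    The implicit reward is the usual DPO quantity
        r(x, y) = beta * (log pi_theta(y|x) - log pi_ref(y|x)).
    The contrastive term constrains only the difference of rewards;
    the calibration term also constrains their absolute values.
    """
    # Implicit rewards for chosen and rejected responses.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)

    # Contrastive (DPO-style) term: depends only on the reward gap.
    contrastive = -F.logsigmoid(chosen_rewards - rejected_rewards)

    # Calibration term: pull implicit rewards toward ground-truth-scale
    # targets; +/- reward_target is an assumed placeholder value.
    calibration = (chosen_rewards - reward_target) ** 2 \
                + (rejected_rewards + reward_target) ** 2

    return (contrastive + calibration).mean()
```

In this sketch, the per-sequence log-probabilities would come from summing token log-probs of the policy and the frozen reference model over each response; the calibration term is what keeps the learned rewards on a scale comparable to the assumed ground-truth targets.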
