

Poster in Workshop: Instruction Tuning and Instruction Following

Data-Efficient Alignment of Large Language Models with Human Feedback Through Natural Language

Di Jin · Shikib Mehri · Devamanyu Hazarika · Aishwarya Padmakumar · Sungjin Lee · Yang Liu · Mahdi Namazifar

Keywords: [ Critique and Revision ] [ Large Language Model ] [ Alignment ] [ Learning from Human Feedback ]


Abstract:

Learning from human feedback is a prominent technique for aligning the output of large language models (LLMs) with human expectations. Reinforcement learning from human feedback (RLHF) performs this alignment using human preference signals in the form of rankings of response pairs. However, human preferences on LLM outputs can come in much richer forms, including natural language, which can provide detailed feedback on the strengths and weaknesses of a given response. In this work we investigate the data efficiency of modeling human feedback expressed in natural language. Specifically, we fine-tune an open-source LLM, e.g., Falcon-40B-Instruct, on a relatively small amount (1,000 records or fewer) of natural-language human feedback in the form of critiques and revisions of responses. We show that this model can improve the quality of responses from even some of the strongest LLMs, such as ChatGPT, BARD, and Vicuna, by critiquing and revising those responses. For instance, after one iteration of revision of ChatGPT responses, the revised responses achieve a 56.6% win rate over the originals, and this win rate improves further to 65.9% after five iterations of revision.
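The iterative critique-and-revision procedure described above can be sketched as a simple loop: the fine-tuned feedback model first critiques the current response, then produces a revised response conditioned on that critique, and the process repeats. The sketch below is a minimal illustration under assumptions of our own: the `generate` callable, the prompt templates, and the function names are hypothetical and are not the paper's actual interface or prompts.

```python
from typing import Callable

# Hypothetical prompt templates; the paper's actual prompts are not specified here.
CRITIQUE_PROMPT = (
    "Instruction:\n{instruction}\n\n"
    "Response:\n{response}\n\n"
    "Critique this response, pointing out its strengths and weaknesses."
)
REVISION_PROMPT = (
    "Instruction:\n{instruction}\n\n"
    "Response:\n{response}\n\n"
    "Critique:\n{critique}\n\n"
    "Revise the response to address the critique."
)


def iterative_revision(
    generate: Callable[[str], str],  # the fine-tuned critique/revision model (e.g., Falcon-40B-Instruct)
    instruction: str,
    response: str,                   # initial response from a strong LLM (e.g., ChatGPT)
    iterations: int = 5,             # the abstract reports gains up to five iterations
) -> str:
    """Repeatedly critique and revise a response with the feedback model."""
    for _ in range(iterations):
        # Step 1: generate a natural-language critique of the current response.
        critique = generate(
            CRITIQUE_PROMPT.format(instruction=instruction, response=response)
        )
        # Step 2: generate a revised response conditioned on the critique.
        response = generate(
            REVISION_PROMPT.format(
                instruction=instruction, response=response, critique=critique
            )
        )
    return response
```

In this sketch, a single model handles both critique and revision; passing `iterations=1` corresponds to the one-iteration setting reported in the abstract, while larger values correspond to repeated application of the same critique-then-revise step.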
