

Poster in Workshop: Instruction Tuning and Instruction Following

Supervised Fine-Tuning of Large Language Models on Human Demonstrations Through the Lens of Memorization

Yubin Ge · Devamanyu Hazarika · Yang Liu · Mahdi Namazifar

Keywords: [ Large Language Models ] [ Supervised Fine-Tuning ] [ Memorization ]


Abstract:

In recent years, the field of natural language processing (NLP) has witnessed remarkable advances driven by the development of large language models (LLMs). Techniques such as instruction tuning have emerged as crucial approaches, enhancing LLMs' adaptability to new tasks guided by instructional prompts. Meanwhile, the phenomenon of memorization within LLMs has garnered considerable attention. In this work, we delve into memorization within LLMs during supervised fine-tuning on human demonstrations and find a distinct pattern: initial memorization growth followed by stabilization, with different degrees of memorization observed across tasks. An intriguing observation is that an increase in validation perplexity, typically indicative of overfitting, does not result in lower generation quality. We probe deeper by examining the entropy of the LLM's output probabilities, uncovering a consistent decrease in entropy throughout training under both nucleus sampling and teacher forcing. This implies growing confidence within the LLM in generating output, even though that output may deviate from the expected ground truth. Building on this investigation, we propose a novel Memorization-Based Curriculum (MBC) learning approach: we leverage likelihood as a proxy for memorization and use it to construct a data distribution for sampling instances with replacement during supervised fine-tuning, emphasizing data with lower degrees of memorization. Evaluations using GPT-4 as a judge demonstrate the effectiveness of MBC in fine-tuning LLMs on human demonstrations.
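The MBC sampling step lends itself to a brief sketch. The code below is a minimal illustration, not the authors' implementation: it assumes a Hugging Face-style causal LM whose label tensors mark ignored positions with -100, and it maps negated average log-likelihoods through a tempered softmax to obtain sampling weights. The `temperature` parameter and the softmax mapping are assumptions for illustration; the abstract only specifies that likelihood serves as the memorization proxy and that instances are sampled with replacement, favoring less-memorized data.

```python
import torch
import torch.nn.functional as F

def sequence_log_likelihood(model, input_ids, labels):
    """Average per-token log-likelihood of `labels` under `model`,
    used here as a proxy for how strongly an example is memorized."""
    with torch.no_grad():
        logits = model(input_ids=input_ids).logits
    # Shift so position t predicts token t+1; mask padding labels (-100).
    log_probs = F.log_softmax(logits[:, :-1, :], dim=-1)
    targets = labels[:, 1:]
    mask = (targets != -100).float()
    token_ll = log_probs.gather(
        -1, targets.clamp(min=0).unsqueeze(-1)
    ).squeeze(-1)
    return (token_ll * mask).sum(-1) / mask.sum(-1)

def mbc_sampling_weights(per_example_ll, temperature=1.0):
    """Map memorization proxies to a sampling distribution that puts
    more mass on LOWER-likelihood (less memorized) examples.
    NOTE: the tempered softmax is an assumed weighting scheme."""
    return torch.softmax(-per_example_ll / temperature, dim=0)

# Example: draw a training batch with replacement under the MBC weights.
per_example_ll = torch.tensor([-0.4, -1.9, -0.8, -3.2])  # scored on the dataset
weights = mbc_sampling_weights(per_example_ll)
batch_idx = torch.multinomial(weights, num_samples=2, replacement=True)
```

In this reading, the temperature controls how aggressively low-memorization examples are upweighted: a small temperature concentrates sampling on the least-memorized data, while a large one approaches uniform sampling.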
