

Poster

Where does In-context Learning Happen in Large Language Models?

Suzanna Sia · David Mueller · Kevin Duh

East Exhibit Hall A-C #2801
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Self-supervised large language models have demonstrated the ability to perform various tasks via in-context learning, but little is known about where the model locates the task with respect to prompt instructions and demonstration examples. In this work, we attempt to characterize the region where large language models transition from recognizing the task to performing the task. Through a series of layer-wise context-masking experiments on GPTNeo2.7B, Bloom3B, Llama2-7b, Llama2-7b-chat, Starcoder2-3B and Starcoder2-7B on Machine Translation and Code Generation, we demonstrate evidence of a "task recognition" point where the task is encoded into the input representations and attention to the context is no longer necessary. We further observe a correspondence between the layers that cause low performance when masked out entirely and the task recognition layers. Taking advantage of this redundancy yields 45% computational savings when prompting with 5 examples, with task recognition achieved at layer 14 of 32 using a single Machine Translation example.
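As a rough illustration of the layer-wise context-masking setup described in the abstract, the sketch below builds per-layer attention masks that hide the in-context demonstrations from a chosen layer onward, while earlier layers still attend to the full prompt. This is a minimal sketch under assumed conventions; the function name, arguments, and masking details are illustrative and not the authors' released code.

```python
import torch

def layerwise_context_mask(num_layers: int, seq_len: int,
                           context_len: int, mask_from_layer: int):
    """Build one boolean attention mask per layer for a context-masking run.

    Layers below `mask_from_layer` keep the usual causal mask over the full
    prompt (instructions + demonstrations + test input). From `mask_from_layer`
    onward, attention to the first `context_len` positions (the in-context
    examples) is blocked, so only the test input remains visible.
    """
    causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    masks = []
    for layer in range(num_layers):
        mask = causal.clone()
        if layer >= mask_from_layer:
            # Hide the in-context examples from every query position.
            mask[:, :context_len] = False
            # Keep self-attention on the diagonal so no row is fully masked.
            mask |= torch.eye(seq_len, dtype=torch.bool)
        masks.append(mask)
    return masks
```

For example, `layerwise_context_mask(num_layers=32, seq_len=256, context_len=200, mask_from_layer=14)` (sequence and context lengths chosen arbitrarily here) would correspond to removing attention to the context from layer 14 of 32 onward, the point at which the abstract reports task recognition for Machine Translation.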
