Spotlight Poster

In-Context Learning with Transformers: Softmax Attention Adapts to Function Lipschitzness

Liam Collins · Advait Parulekar · Aryan Mokhtari · Sujay Sanghavi · Sanjay Shakkottai

Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

A striking property of transformers is their ability to perform in-context learning (ICL), a machine learning framework in which the learner is presented with a novel context during inference, implicitly through some data, and is tasked with making a prediction in that context. The learner must therefore adapt to the context without additional training. We explore the role of softmax attention in an ICL setting where each context encodes a regression task. We show that an attention unit learns a window that it uses to implement a nearest-neighbors predictor adapted to the landscape of the pretraining tasks. Specifically, we show that this window widens with decreasing Lipschitzness and increasing label noise in the pretraining tasks. We also show that on low-rank, linear problems, the attention unit learns to project onto the appropriate subspace before inference. Further, we show that this adaptivity relies crucially on the softmax activation and thus cannot be replicated by the linear activation often studied in prior theoretical analyses.
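To illustrate the nearest-neighbors view of softmax attention described above, here is a minimal sketch, not the paper's exact parameterization: attention scores are taken as negative squared distances between the query and the context inputs (an assumption for clarity), so the prediction is a softmax-weighted average of the context labels, i.e. a Nadaraya-Watson estimator whose window width is set by a `scale` parameter. A sharp window (large `scale`) behaves like a nearest-neighbor rule, while a wide window (small `scale`) averages many labels, which is favorable when the tasks have low Lipschitzness or high label noise.

```python
import numpy as np

def softmax_attention_predict(x_query, X_ctx, y_ctx, scale=1.0):
    """Softmax attention as an in-context regressor (illustrative sketch).

    Scores are negative squared distances (a simplifying assumption);
    the output is the attention-weighted average of context labels,
    i.e. Nadaraya-Watson regression with an exponential kernel whose
    bandwidth is controlled by `scale`.
    """
    scores = -scale * np.sum((X_ctx - x_query) ** 2, axis=1)  # (n,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax "window" over context
    return weights @ y_ctx                   # weighted average of labels

# Sharp window -> nearest-neighbor behavior; flat window -> global mean.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
y = rng.normal(size=50)
x_query = X[7] + 0.01 * rng.normal(size=4)   # near context point 7

pred_sharp = softmax_attention_predict(x_query, X, y, scale=100.0)
pred_flat = softmax_attention_predict(x_query, X, y, scale=0.0)
```

With `scale=100.0` the prediction essentially copies the label of the nearest context point, while `scale=0.0` makes all weights uniform and returns the mean label, mirroring how a wider attention window trades locality for noise averaging.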
