

San Diego Oral

The emergence of sparse attention: impact of data distribution and benefits of repetition

Nicolas Zucchet · Francesco D'Angelo · Andrew Lampinen · Stephanie Chan

Upper Level Ballroom 6AB
Wed 3 Dec 3:50 p.m. PST — 4:10 p.m. PST

Abstract:

Emergence is a fascinating property of large language models and neural networks more broadly: as models scale and train for longer, they sometimes develop new abilities in sudden ways. Despite initial studies, we still lack a comprehensive understanding of how and when these abilities emerge. To address this gap, we study the emergence over training of sparse attention, a critical and frequently observed attention pattern in Transformers. By combining theoretical analysis of a toy model with empirical observations on small Transformers trained on a linear regression variant, we uncover the mechanics driving sparse attention emergence and reveal that emergence timing follows power laws based on task structure, architecture, and optimizer choice. We additionally find that repetition can greatly speed up emergence. Finally, we confirm these results on a well-studied in-context associative recall task. Our findings provide a simple, theoretically grounded framework for understanding how data distributions and model design influence the learning dynamics behind one form of emergence.
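The page does not spell out the exact task formats used in the paper. As a hedged illustration of the kind of setup the abstract mentions, the sketch below generates data for a standard in-context associative recall task: each sequence interleaves key–value token pairs and ends with a query key, and the model must recall the associated value. All names and the token layout here are assumptions for illustration, not the authors' implementation.

```python
import numpy as np


def sample_associative_recall_batch(batch_size=32, num_pairs=8, vocab_size=64, rng=None):
    """Sample in-context associative recall sequences (illustrative format).

    Each sequence is "k1 v1 k2 v2 ... kN vN q", where q repeats one of the
    keys; the target is the value paired with q. Keys use token ids in
    [0, vocab_size) and values use [vocab_size, 2 * vocab_size).
    """
    rng = rng or np.random.default_rng()
    inputs = np.empty((batch_size, 2 * num_pairs + 1), dtype=np.int64)
    targets = np.empty(batch_size, dtype=np.int64)
    for i in range(batch_size):
        keys = rng.choice(vocab_size, size=num_pairs, replace=False)
        values = rng.integers(vocab_size, 2 * vocab_size, size=num_pairs)
        inputs[i, : 2 * num_pairs : 2] = keys      # even positions: keys
        inputs[i, 1 : 2 * num_pairs : 2] = values  # odd positions: values
        query_idx = rng.integers(num_pairs)
        inputs[i, -1] = keys[query_idx]            # query key at the end
        targets[i] = values[query_idx]             # value to be recalled
    return inputs, targets


if __name__ == "__main__":
    x, y = sample_associative_recall_batch(batch_size=2, num_pairs=4, vocab_size=10)
    print(x)
    print(y)
```

Solving this task requires an attention head that focuses sharply on the context position holding the queried key's value, which is why it is a natural testbed for tracking when sparse attention patterns emerge during training.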
