
Poster
Scatterbrain: Unifying Sparse and Low-rank Attention
Beidi Chen · Tri Dao · Eric Winsor · Zhao Song · Atri Rudra · Christopher Ré

Fri Dec 10 08:30 AM -- 10:00 AM (PST)
Recent advances in efficient Transformers have exploited either the sparsity or low-rank properties of attention matrices to reduce the computational and memory bottlenecks of modeling long sequences. However, it is still challenging to balance the trade-off between model quality and efficiency to perform a one-size-fits-all approximation for different tasks. To better understand this trade-off, we observe that sparse and low-rank approximations excel in different regimes, determined by the softmax temperature in attention, and sparse + low-rank can outperform each individually. Inspired by the classical robust-PCA algorithm for sparse and low-rank decomposition, we propose Scatterbrain, a novel way to unify sparse (via locality sensitive hashing) and low-rank (via kernel feature map) attention for accurate and efficient approximation. The estimation is unbiased with provably low error. We empirically show that Scatterbrain can achieve $2.1 \times$ lower error than baselines when serving as a drop-in replacement in BigGAN image generation and pre-trained T2T-ViT. On a pre-trained T2T Vision transformer, even without fine-tuning, Scatterbrain can reduce $98\%$ of attention memory at the cost of only $1\%$ drop in accuracy. We demonstrate Scatterbrain for end-to-end training with up to $4$ points better perplexity and 5 points better average accuracy than sparse or low-rank efficient transformers on language modeling and long-range-arena tasks.
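The core idea sketched in the abstract is a sparse + low-rank decomposition of the attention matrix: a kernel feature map (as in Performer-style attention) supplies the low-rank estimate, and a locality-sensitive-hashing step supplies a sparse correction on the entries where the two keys and queries collide, keeping the combined estimate exact on that support. The snippet below is a minimal NumPy sketch of this combination, not the paper's implementation; the function names (`scatterbrain_attention`, `softmax_random_features`, `lsh_buckets`), the sign-projection LSH, and parameters such as `n_features` and `n_bits` are illustrative assumptions.

```python
import numpy as np

def softmax_random_features(x, proj):
    # Positive random features approximating exp(q . k) (Performer-style):
    # phi(x) = exp(x @ w - ||x||^2 / 2) / sqrt(m), so E[phi(q) . phi(k)] = exp(q . k).
    m = proj.shape[1]
    return np.exp(x @ proj - 0.5 * np.sum(x**2, axis=-1, keepdims=True)) / np.sqrt(m)

def lsh_buckets(x, hyperplanes):
    # Sign-random-projection LSH: rows with the same sign pattern share a bucket.
    bits = (x @ hyperplanes) > 0
    return bits @ (1 << np.arange(hyperplanes.shape[1]))

def scatterbrain_attention(q, k, v, n_features=64, n_bits=4, seed=0):
    """Illustrative sparse + low-rank attention (unit softmax temperature assumed).
    Low-rank part: random-feature estimate of exp(QK^T) V in linear time.
    Sparse part: correction exp(q . k) - phi(q) . phi(k) on LSH-collided pairs,
    so the combined estimate is exact on the sparse support."""
    n, d = q.shape
    rng = np.random.default_rng(seed)
    proj = rng.normal(size=(d, n_features))
    phi_q = softmax_random_features(q, proj)
    phi_k = softmax_random_features(k, proj)

    # Low-rank estimate: phi(Q) (phi(K)^T V), cost O(n * n_features * d).
    out = phi_q @ (phi_k.T @ v)
    normalizer = phi_q @ phi_k.sum(axis=0)

    # Sparse correction on query/key pairs that fall into the same LSH bucket.
    planes = rng.normal(size=(d, n_bits))
    bq, bk = lsh_buckets(q, planes), lsh_buckets(k, planes)
    for i in range(n):
        (idx,) = np.nonzero(bk == bq[i])
        if idx.size == 0:
            continue
        exact = np.exp(q[i] @ k[idx].T)     # true unnormalized attention scores
        approx = phi_q[i] @ phi_k[idx].T    # low-rank estimate of the same scores
        out[i] += (exact - approx) @ v[idx]
        normalizer[i] += (exact - approx).sum()

    return out / normalizer[:, None]

# Toy usage: compare against exact softmax attention on random inputs.
rng = np.random.default_rng(1)
q = rng.normal(size=(128, 32)) / 32**0.25   # scaling mimics 1/sqrt(d) attention
k = rng.normal(size=(128, 32)) / 32**0.25
v = rng.normal(size=(128, 32))
scores = np.exp(q @ k.T)
exact_out = (scores / scores.sum(axis=1, keepdims=True)) @ v
approx_out = scatterbrain_attention(q, k, v)
print("mean abs error:", np.abs(exact_out - approx_out).mean())
```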

#### Author Information

##### Beidi Chen (Stanford University)

I am a third-year Ph.D. student at Rice University working with Dr. Anshumali Shrivastava. My research focuses on hashing for large-scale learning. I also work closely with Dr. Rebecca Steorts on record linkage. I did my undergraduate studies at Berkeley, advised by Randy Katz, where I worked on data mining.