

Poster

Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Models

Aviv Bick · Kevin Li · Eric Xing · J. Zico Kolter · Albert Gu

East Exhibit Hall A-C #4704
Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Transformer architectures have become a dominant paradigm for domains like language modeling, but suffer in many inference settings due to their quadratic-time self-attention. Recently proposed sub-quadratic architectures such as Mamba have shown promise, but have been pretrained with substantially fewer computational resources than the strongest Transformer models. In this work, we present a method that is able to distill a pretrained Transformer architecture into alternative architectures such as state space models (SSMs). The key idea of our approach is that both Transformers and SSMs can be viewed as applying different forms of mixing matrices over the token sequences. We can thus progressively distill the Transformer architecture by matching different degrees of granularity in the SSM: first matching the mixing matrices themselves, then the hidden units at each block, then the end-to-end predictions. Our method, termed MOHAWK, is able to distill a Mamba-2 variant based upon the Phi-1.5 architecture (Phi-Mamba), using less than 3B tokens. Despite using less than 1% of the training data typically used to train models from scratch, we demonstrate substantially stronger performance than all past open-source non-Transformer models. This demonstrates that models such as SSMs can leverage computational resources invested in training Transformer-based architectures, highlighting a new avenue for building such models.
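
To make the three-stage matching idea concrete, below is a minimal sketch of what the per-stage distillation objectives could look like. The module interfaces (e.g. a `mixing_matrix` accessor on each block) and the specific choices of loss (MSE for matrices and hidden states, KL divergence for logits) are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of MOHAWK's three matching stages.
# `teacher_block`/`student_block` and `mixing_matrix` are assumed interfaces.
import torch
import torch.nn.functional as F

def stage1_matrix_loss(teacher_block, student_block, x):
    # Stage 1: align the sequence-mixing matrices (the attention matrix
    # vs. the SSM's equivalent token-mixing matrix) on the same input.
    with torch.no_grad():
        t_mix = teacher_block.mixing_matrix(x)   # (batch, seq, seq)
    s_mix = student_block.mixing_matrix(x)       # (batch, seq, seq)
    return F.mse_loss(s_mix, t_mix)

def stage2_hidden_loss(teacher_block, student_block, x):
    # Stage 2: match each block's output hidden states.
    with torch.no_grad():
        h_teacher = teacher_block(x)
    h_student = student_block(x)
    return F.mse_loss(h_student, h_teacher)

def stage3_logit_loss(teacher_model, student_model, input_ids):
    # Stage 3: distill end-to-end predictions via a KL term on the logits.
    with torch.no_grad():
        t_logits = teacher_model(input_ids)
    s_logits = student_model(input_ids)
    return F.kl_div(F.log_softmax(s_logits, dim=-1),
                    F.softmax(t_logits, dim=-1),
                    reduction="batchmean")
```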
