

Spotlight Poster

Tracr: Compiled Transformers as a Laboratory for Interpretability

David Lindner · Janos Kramar · Sebastian Farquhar · Matthew Rahtz · Tom McGrath · Vladimir Mikulik

Great Hall & Hall B1+B2 (level 1) #1516
[ Project Page ] [ Paper ] [ Poster ] [ OpenReview ]
Thu 14 Dec 3 p.m. PST — 5 p.m. PST

Abstract:

We show how to "compile" human-readable programs into standard decoder-only transformer models. Our compiler, Tracr, generates models with known structure, which can be used to design experiments. For example, we use it to study "superposition" in transformers that execute multi-step algorithms. Additionally, the known structure of Tracr-compiled models can serve as ground truth for evaluating interpretability methods: because the "programs" learned by trained transformers are typically unknown, it is otherwise unclear whether an interpretation has succeeded. We demonstrate our approach by implementing and examining programs including computing token frequencies, sorting, and parenthesis checking. We provide an open-source implementation of Tracr at https://github.com/google-deepmind/tracr.
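To make the workflow concrete, here is a minimal sketch of compiling a program with Tracr, adapted from the open-source repository's README. Tracr takes programs written in RASP, a domain-specific language for expressing transformer computations; the example below compiles a RASP program that computes the sequence length at every position. API names such as `compiling.compile_rasp_to_model` reflect the public release and may differ across versions.

```python
# A minimal sketch of compiling a RASP program with Tracr, based on the
# library's README; exact API details may vary between versions.
from tracr.rasp import rasp
from tracr.compiler import compiling

# RASP program computing the sequence length at every position:
# select all (query, key) pairs, then count how many positions
# each query attends to.
length = rasp.SelectorWidth(
    rasp.Select(rasp.tokens, rasp.tokens, rasp.Comparison.TRUE)
)

# Compile the program into a concrete decoder-only transformer.
model = compiling.compile_rasp_to_model(
    length,
    vocab={1, 2, 3},     # input vocabulary the model should accept
    max_seq_len=5,       # maximum sequence length to support
    compiler_bos="BOS",  # beginning-of-sequence token the compiler adds
)

# Run the compiled transformer; every position outputs the length.
out = model.apply(["BOS", 1, 2, 3])
print(out.decoded)  # expected: ['BOS', 3, 3, 3]
```

Because the compiled weights implement the RASP program exactly, every attention head and MLP in the resulting model has a known role, which is what makes such models usable as ground truth for interpretability experiments.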
