We introduce the Block-Recurrent Transformer, which applies a transformer layer in a recurrent fashion along a sequence, and has linear complexity with respect to sequence length. Our recurrent cell operates on blocks of tokens rather than single tokens during training, and leverages parallel computation within a block in order to make efficient use of accelerator hardware. The cell itself is strikingly simple. It is merely a transformer layer: it uses self-attention and cross-attention to efficiently compute a recurrent function over a large set of state vectors and tokens. Our design was inspired in part by LSTM cells, and it uses LSTM-style gates, but it scales the typical LSTM cell up by several orders of magnitude. Our implementation of recurrence has the same cost in both computation time and parameter count as a conventional transformer layer, but offers dramatically improved perplexity in language modeling tasks over very long sequences. Our model outperforms a long-range Transformer-XL baseline by a wide margin, while running twice as fast. We demonstrate its effectiveness on PG19 (books), arXiv papers, and GitHub source code. Our code has been released as open source.
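The core idea in the abstract can be illustrated with a minimal NumPy sketch: the sequence is split into blocks, and a set of carried state vectors cross-attends to each block in turn, with a sigmoid gate (in the spirit of LSTM gates) mixing the attention output into the state. This is an illustrative simplification, not the paper's implementation; all weight names (`Wq`, `Wk`, `Wv`, `Wg`) and dimensions are assumptions for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # standard scaled dot-product attention
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def block_recurrent_step(state, block, Wq, Wk, Wv, Wg):
    """One recurrent update: the state vectors cross-attend to the
    current token block, then an LSTM-style sigmoid gate decides how
    much of the attention result to mix into the carried state."""
    q = state @ Wq            # queries come from the recurrent state
    k = block @ Wk            # keys/values come from the token block
    v = block @ Wv
    update = attention(q, k, v)
    gate = 1.0 / (1.0 + np.exp(-(state @ Wg)))   # sigmoid gate
    return gate * state + (1.0 - gate) * update

rng = np.random.default_rng(0)
d, n_state, block_len, n_blocks = 16, 8, 32, 4
Wq, Wk, Wv, Wg = (rng.normal(scale=0.1, size=(d, d)) for _ in range(4))
state = np.zeros((n_state, d))
tokens = rng.normal(size=(n_blocks, block_len, d))
# One pass per block: cost is linear in sequence length, because each
# block only attends within itself (here: state <-> block), never to
# the full history.
for block in tokens:
    state = block_recurrent_step(state, block, Wq, Wk, Wv, Wg)
print(state.shape)  # (8, 16)
```

Because the block size is fixed, attention cost per block is constant, so the total cost grows linearly with the number of blocks; within each block the attention is fully parallel, which is what makes the cell accelerator-friendly.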
Author Information
DeLesley Hutchins (Google)
DeLesley Hutchins received his PhD from the University of Edinburgh, under the direction of Philip Wadler, with a dissertation on programming language semantics and type theory. DeLesley joined Google in 2011, and worked on the C++ compiler team for several years, developing Clang Thread Safety Analysis. He transferred to Google Research in 2015, and currently works with the n2formal team at Google on improvements to transformers.
Imanol Schlag (IDSIA)
Yuhuai Wu (Google)
Ethan Dyer (Blueshift, Google Research)
Behnam Neyshabur (Google)
More from the Same Authors
-
2021 : Augmenting Classic Algorithms with Neural Components for Strong Generalisation on Ambiguous and High-Dimensional Data »
Imanol Schlag · Jürgen Schmidhuber -
2021 : Improving Baselines in the Wild »
Kazuki Irie · Imanol Schlag · Róbert Csordás · Jürgen Schmidhuber -
2021 : A Modern Self-Referential Weight Matrix That Learns to Modify Itself »
Kazuki Irie · Imanol Schlag · Róbert Csordás · Jürgen Schmidhuber -
2022 : Teaching Algorithmic Reasoning via In-context Learning »
Hattie Zhou · Azade Nova · Aaron Courville · Hugo Larochelle · Behnam Neyshabur · Hanie Sedghi -
2022 : Fast and Precise: Adjusting Planning Horizon with Adaptive Subgoal Search »
Michał Zawalski · Michał Tyrolski · Konrad Czechowski · Damian Stachura · Piotr Piękos · Tomasz Odrzygóźdź · Yuhuai Wu · Łukasz Kuciński · Piotr Miłoś -
2022 : MATH-AI: Toward Human-Level Mathematical Reasoning »
Francois Charton · Noah Goodman · Behnam Neyshabur · Talia Ringer · Daniel Selsam -
2022 : Panel Discussion »
Behnam Neyshabur · David Sontag · Pradeep Ravikumar · Erin Hartman -
2022 : Length Generalization in Quantitative Reasoning »
Behnam Neyshabur -
2022 Workshop: MATH-AI: Toward Human-Level Mathematical Reasoning »
Pan Lu · Swaroop Mishra · Sean Welleck · Yuhuai Wu · Hannaneh Hajishirzi · Percy Liang -
2022 Poster: Autoformalization with Large Language Models »
Yuhuai Wu · Albert Q. Jiang · Wenda Li · Markus Rabe · Charles Staats · Mateja Jamnik · Christian Szegedy -
2022 Poster: Insights into Pre-training via Simpler Synthetic Tasks »
Yuhuai Wu · Felix Li · Percy Liang -
2022 Poster: Thor: Wielding Hammers to Integrate Language Models and Automated Theorem Provers »
Albert Q. Jiang · Wenda Li · Szymon Tworkowski · Konrad Czechowski · Tomasz Odrzygóźdź · Piotr Miłoś · Yuhuai Wu · Mateja Jamnik -
2022 Poster: STaR: Bootstrapping Reasoning With Reasoning »
Eric Zelikman · Yuhuai Wu · Jesse Mu · Noah Goodman -
2022 Poster: Exploring Length Generalization in Large Language Models »
Cem Anil · Yuhuai Wu · Anders Andreassen · Aitor Lewkowycz · Vedant Misra · Vinay Ramasesh · Ambrose Slone · Guy Gur-Ari · Ethan Dyer · Behnam Neyshabur -
2022 Poster: Revisiting Neural Scaling Laws in Language and Vision »
Ibrahim Alabdulmohsin · Behnam Neyshabur · Xiaohua Zhai -
2022 Poster: Solving Quantitative Reasoning Problems with Language Models »
Aitor Lewkowycz · Anders Andreassen · David Dohan · Ethan Dyer · Henryk Michalewski · Vinay Ramasesh · Ambrose Slone · Cem Anil · Imanol Schlag · Theo Gutman-Solo · Yuhuai Wu · Behnam Neyshabur · Guy Gur-Ari · Vedant Misra -
2022 Poster: Path Independent Equilibrium Models Can Better Exploit Test-Time Computation »
Cem Anil · Ashwini Pokle · Kaiqu Liang · Johannes Treutlein · Yuhuai Wu · Shaojie Bai · J. Zico Kolter · Roger Grosse -
2021 Poster: Going Beyond Linear Transformers with Recurrent Fast Weight Programmers »
Kazuki Irie · Imanol Schlag · Róbert Csordás · Jürgen Schmidhuber -
2019 : Poster Session + Lunch »
Maxwell Nye · Robert Kim · Toby St Clere Smithe · Takeshi D. Itoh · Omar U. Florez · Vesna G. Djokic · Sneha Aenugu · Mariya Toneva · Imanol Schlag · Dan Schwartz · Max Raphael Sobroza Marques · Pravish Sainath · Peng-Hsuan Li · Rishi Bommasani · Najoung Kim · Paul Soulos · Steven Frankland · Nadezhda Chirkova · Dongqi Han · Adam Kortylewski · Rich Pang · Milena Rabovsky · Jonathan Mamou · Vaibhav Kumar · Tales Marra -
2018 Poster: Learning to Reason with Third Order Tensor Products »
Imanol Schlag · Jürgen Schmidhuber