We investigate the training of sparse layers that use different parameters for different inputs based on hashing in large Transformer models. Specifically, we modify the feedforward layer to hash to different sets of weights depending on the current token, over all tokens in the sequence. We show that this procedure either outperforms or is competitive with learning-to-route mixture-of-expert methods such as Switch Transformers and BASE Layers, while requiring no routing parameters or extra terms in the objective function such as a load balancing loss, and no sophisticated assignment algorithm. We study the performance of different hashing techniques, hash sizes and input features, and show that balanced and random hashes focused on the most local features work best, compared to either learning clusters or using longer-range context. We show our approach works well both on large language modeling and dialogue tasks, and on downstream fine-tuning tasks.
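The routing idea described above can be sketched in a few lines: each token is mapped to one of several feedforward (FFN) parameter sets by a fixed, parameter-free hash, so no routing network or load-balancing loss is needed. The sketch below is illustrative only; the dimensions, the simple modulo hash, and the function names (`hash_route`, `hash_ffn`) are assumptions for exposition, not the paper's exact setup (the paper compares several hashing schemes).

```python
import numpy as np

# Illustrative sizes (assumptions, not the paper's configuration).
D_MODEL, D_FF, N_EXPERTS = 16, 32, 4

rng = np.random.default_rng(0)
# One independent FFN parameter set per expert.
W1 = rng.standard_normal((N_EXPERTS, D_MODEL, D_FF)) * 0.02
W2 = rng.standard_normal((N_EXPERTS, D_FF, D_MODEL)) * 0.02

def hash_route(token_id: int) -> int:
    # Fixed (non-learned) hash from token id to expert index.
    # A simple modulo stands in for the balanced/random hashes studied.
    return token_id % N_EXPERTS

def hash_ffn(x: np.ndarray, token_ids: np.ndarray) -> np.ndarray:
    # x: (seq_len, D_MODEL). Each position uses the FFN parameters
    # selected by hashing its own (most local) token id.
    out = np.empty_like(x)
    for t, tok in enumerate(token_ids):
        e = hash_route(int(tok))
        h = np.maximum(x[t] @ W1[e], 0.0)  # ReLU hidden layer
        out[t] = h @ W2[e]
    return out

tokens = np.array([3, 7, 3, 12])
y = hash_ffn(rng.standard_normal((len(tokens), D_MODEL)), tokens)
print(y.shape)  # one output vector per token, same shape as the input
```

Because the hash is fixed before training, identical tokens always reach the same expert, and gradients only flow through the selected expert's weights at each position.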
Author Information
Stephen Roller (Facebook)
Sainbayar Sukhbaatar (New York University)
arthur szlam (Facebook)
Jason Weston (Facebook AI Research)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Hash Layers For Large Sparse Models
  Fri. Dec 10th, 04:30 -- 06:00 PM
More from the Same Authors
- 2021: IGLU: Interactive Grounded Language Understanding in a Collaborative Environment + Q&A
  Ziming Li · Mohammad Aliannejadi · Maartje Anne ter Hoeve · Mikhail Burtsev · Alexey Skrynnik · Artem Zholus · Aleksandr Panov · Katja Hofmann · Kavya Srinet · arthur szlam · Michel Galley · Ahmed Awadallah
- 2020 Workshop: Wordplay: When Language Meets Games
  Prithviraj Ammanabrolu · Matthew Hausknecht · Xingdi Yuan · Marc-Alexandre Côté · Adam Trischler · Kory Mathewson · John Urbanek · Jason Weston · Mark Riedl
- 2018: Humans and models as embodied dialogue agents in text-based games
  Jason Weston
- 2018: The Conversational Intelligence Challenge 2 (ConvAI2): Setup, Opening Words
  Jason Weston
- 2017 Tutorial: Geometric Deep Learning on Graphs and Manifolds
  Michael Bronstein · Joan Bruna · arthur szlam · Xavier Bresson · Yann LeCun
- 2016 Poster: The Product Cut
  Thomas Laurent · James von Brecht · Xavier Bresson · arthur szlam
- 2016 Poster: Learning Multiagent Communication with Backpropagation
  Sainbayar Sukhbaatar · arthur szlam · Rob Fergus
- 2015 Poster: End-To-End Memory Networks
  Sainbayar Sukhbaatar · arthur szlam · Jason Weston · Rob Fergus
- 2015 Oral: End-To-End Memory Networks
  Sainbayar Sukhbaatar · arthur szlam · Jason Weston · Rob Fergus
- 2015 Poster: Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks
  Emily Denton · Soumith Chintala · arthur szlam · Rob Fergus