We introduce a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of Memory Network (Weston et al., 2015) but, unlike the model in that work, it is trained end-to-end and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. It can also be seen as an extension of RNNsearch to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering and language modeling. For the former, our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets, our approach demonstrates performance comparable to RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.
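The core operation the abstract describes is a soft attention lookup over the external memory, repeated for several hops before producing an output. The snippet below is a minimal sketch of that hop loop in NumPy, not the paper's full model: the names (query, memory_keys, memory_values, num_hops) are illustrative, and the actual architecture learns separate input and output embedding matrices per hop plus a final softmax prediction layer.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def memory_hops(query, memory_keys, memory_values, num_hops=3):
    """Soft attention over an external memory, repeated for several hops.

    query:         (d,)   embedded question / controller state
    memory_keys:   (n, d) input-side embeddings of the n memory entries
    memory_values: (n, d) output-side embeddings of the n memory entries
    """
    u = query
    for _ in range(num_hops):
        # Attention weights: match the controller state against every memory slot.
        p = softmax(memory_keys @ u)          # (n,)
        # Read vector: attention-weighted sum of the output embeddings.
        o = p @ memory_values                 # (d,)
        # Update the controller state before the next hop (simple additive update).
        u = u + o
    return u

# Toy usage: 5 memory slots, 4-dimensional embeddings.
rng = np.random.default_rng(0)
u_final = memory_hops(rng.normal(size=4),
                      rng.normal(size=(5, 4)),
                      rng.normal(size=(5, 4)))
print(u_final.shape)  # (4,)
```

Because every step is a differentiable weighted sum rather than a hard memory lookup, the whole stack of hops can be trained end-to-end with backpropagation, which is what removes the need for the stronger supervision used by the original Memory Networks.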
Author Information
Sainbayar Sukhbaatar (New York University)
arthur szlam (Facebook)
Jason Weston (Facebook AI Research)
Rob Fergus (Facebook AI Research)
More from the Same Authors
- 2021 Spotlight: Hash Layers For Large Sparse Models »
  Stephen Roller · Sainbayar Sukhbaatar · arthur szlam · Jason Weston
- 2022: Learning to Reason and Memorize with Self-Questioning »
  Jack Lanchantin · Shubham Toshniwal · Jason E Weston · arthur szlam · Sainbayar Sukhbaatar
- 2022: Fifteen-minute Competition Overview Video »
  Maartje Anne ter Hoeve · Mikhail Burtsev · Zoya Volovikova · Ziming Li · Yuxuan Sun · Shrestha Mohanty · Negar Arabzadeh · Mohammad Aliannejadi · Milagro Teruel · Marc-Alexandre Côté · Kavya Srinet · arthur szlam · Artem Zholus · Alexey Skrynnik · Aleksandr Panov · Ahmed Awadallah · Julia Kiseleva
- 2022 Competition: IGLU: Interactive Grounded Language Understanding in a Collaborative Environment »
  Julia Kiseleva · Alexey Skrynnik · Artem Zholus · Shrestha Mohanty · Negar Arabzadeh · Marc-Alexandre Côté · Mohammad Aliannejadi · Milagro Teruel · Ziming Li · Mikhail Burtsev · Maartje Anne ter Hoeve · Zoya Volovikova · Aleksandr Panov · Yuxuan Sun · arthur szlam · Ahmed Awadallah · Kavya Srinet
- 2022: Invited Keynote by Jason Weston »
  Jason Weston
- 2021 Poster: Hash Layers For Large Sparse Models »
  Stephen Roller · Sainbayar Sukhbaatar · arthur szlam · Jason Weston
- 2021: IGLU: Interactive Grounded Language Understanding in a Collaborative Environment + Q&A »
  Julia Kiseleva · Ziming Li · Mohammad Aliannejadi · Maartje Anne ter Hoeve · Mikhail Burtsev · Alexey Skrynnik · Artem Zholus · Aleksandr Panov · Katja Hofmann · Kavya Srinet · arthur szlam · Michel Galley · Ahmed Awadallah
- 2020 Workshop: Wordplay: When Language Meets Games »
  Prithviraj Ammanabrolu · Matthew Hausknecht · Xingdi Yuan · Marc-Alexandre Côté · Adam Trischler · Kory Mathewson · John Urbanek · Jason Weston · Mark Riedl
- 2018: Humans and models as embodied dialogue agents in text-based games »
  Jason Weston
- 2018: The Conversational Intelligence Challenge 2 (ConvAI2): Setup, Opening Words »
  Jason Weston
- 2017 Tutorial: Geometric Deep Learning on Graphs and Manifolds »
  Michael Bronstein · Joan Bruna · arthur szlam · Xavier Bresson · Yann LeCun
- 2016 Workshop: Intuitive Physics »
  Adam Lerer · Jiajun Wu · Josh Tenenbaum · Emmanuel Dupoux · Rob Fergus
- 2016 Poster: The Product Cut »
  Thomas Laurent · James von Brecht · Xavier Bresson · arthur szlam
- 2016 Poster: Learning Multiagent Communication with Backpropagation »
  Sainbayar Sukhbaatar · arthur szlam · Rob Fergus
- 2015 Poster: End-To-End Memory Networks »
  Sainbayar Sukhbaatar · arthur szlam · Jason Weston · Rob Fergus
- 2015 Poster: Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks »
  Emily Denton · Soumith Chintala · arthur szlam · Rob Fergus