Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely manipulate knowledge is still limited, and hence on knowledge-intensive tasks their performance lags behind task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge remain open research problems. Pre-trained models with a differentiable access mechanism to explicit non-parametric memory can overcome these issues, but have so far only been investigated for extractive downstream tasks. We explore a general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) -- models which combine pre-trained parametric and non-parametric memory for language generation. We introduce RAG models where the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. We compare two RAG formulations: one conditions on the same retrieved passages across the whole generated sequence, while the other can use different passages per token. We fine-tune and evaluate our models on a wide range of knowledge-intensive NLP tasks and set the state-of-the-art on three open domain QA tasks, outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation tasks, we find that RAG models generate more specific, diverse and factual language than a state-of-the-art parametric-only seq2seq baseline.
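To make the difference between the two formulations concrete, here is a minimal illustrative sketch (not the authors' implementation) of how retriever and generator probabilities could be combined. The function names, array shapes, and toy numbers are assumptions for illustration only: the first formulation marginalises over the retrieved passages once for the whole output sequence, while the second marginalises at every generated token, so different tokens can draw on different passages.

# Illustrative sketch only; names and shapes are assumptions, not the paper's code.
import numpy as np

def rag_sequence_score(doc_probs, token_probs):
    """p(y|x) ~= sum_z p(z|x) * prod_i p(y_i | x, z, y_<i).

    doc_probs:   (K,)   retriever probability of each of K retrieved passages
    token_probs: (K, T) generator probability of each of T target tokens given each passage
    """
    per_doc_seq_prob = token_probs.prod(axis=1)            # sequence likelihood per passage, (K,)
    return float((doc_probs * per_doc_seq_prob).sum())      # marginalise once over passages

def rag_token_score(doc_probs, token_probs):
    """p(y|x) = prod_i sum_z p(z|x) * p(y_i | x, z, y_<i).

    Passages are marginalised at every token position, so each token
    can effectively rely on a different retrieved passage.
    """
    per_token_marginal = (doc_probs[:, None] * token_probs).sum(axis=0)  # (T,)
    return float(per_token_marginal.prod())

# Toy example: 2 retrieved passages, 3 target tokens.
doc_probs = np.array([0.7, 0.3])
token_probs = np.array([[0.9, 0.2, 0.8],
                        [0.1, 0.9, 0.7]])
print(rag_sequence_score(doc_probs, token_probs))
print(rag_token_score(doc_probs, token_probs))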
Author Information
Patrick Lewis (Facebook AI Research, University College London)
Ethan Perez (New York University)
Aleksandra Piktus (Facebook AI)
Fabio Petroni (Facebook AI Research)
Vladimir Karpukhin (Facebook AI Research)
Naman Goyal (Facebook Inc)
Heinrich Küttler (Facebook AI Research)
Mike Lewis (Facebook AI Research)
Scott Yih (Facebook AI Research)
Tim Rocktäschel (Facebook AI Research)
Sebastian Riedel
Douwe Kiela (Facebook AI Research)
More from the Same Authors
- 2020 Workshop: HAMLETS: Human And Model in the Loop Evaluation and Training Strategies
  Divyansh Kaushik · Bhargavi Paranjape · Forough Arabshahi · Yanai Elazar · Yixin Nie · Max Bartolo · Polina Kirichenko · Pontus Lars Erik Saito Stenetorp · Mohit Bansal · Zachary Lipton · Douwe Kiela
- 2020 Poster: The NetHack Learning Environment
  Heinrich Küttler · Nantas Nardelli · Alexander Miller · Roberta Raileanu · Marco Selvatici · Edward Grefenstette · Tim Rocktäschel
- 2020 Poster: The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes
  Douwe Kiela · Hamed Firooz · Aravind Mohan · Vedanuj Goswami · Amanpreet Singh · Pratik Ringshia · Davide Testuggine
- 2020 Poster: Learning Optimal Representations with the Decodable Information Bottleneck
  Yann Dubois · Douwe Kiela · David Schwab · Ramakrishna Vedantam
- 2020 Spotlight: Learning Optimal Representations with the Decodable Information Bottleneck
  Yann Dubois · Douwe Kiela · David Schwab · Ramakrishna Vedantam
- 2020 Poster: Pre-training via Paraphrasing
  Mike Lewis · Marjan Ghazvininejad · Gargi Ghosh · Armen Aghajanyan · Sida Wang · Luke Zettlemoyer
- 2019 Workshop: Emergent Communication: Towards Natural Language
  Abhinav Gupta · Michael Noukhovitch · Cinjon Resnick · Natasha Jaques · Angelos Filos · Marie Ossenkopf · Angeliki Lazaridou · Jakob Foerster · Ryan Lowe · Douwe Kiela · Kyunghyun Cho
- 2019 Poster: Hyperbolic Graph Neural Networks
  Qi Liu · Maximilian Nickel · Douwe Kiela
- 2019 Poster: Hierarchical Decision Making by Generating and Following Natural Language Instructions
  Hengyuan Hu · Denis Yarats · Qucheng Gong · Yuandong Tian · Mike Lewis
- 2018 Workshop: Emergent Communication Workshop
  Jakob Foerster · Angeliki Lazaridou · Ryan Lowe · Igor Mordatch · Douwe Kiela · Kyunghyun Cho
- 2017 Workshop: Emergent Communication Workshop
  Jakob Foerster · Igor Mordatch · Angeliki Lazaridou · Kyunghyun Cho · Douwe Kiela · Pieter Abbeel
- 2017 Poster: Poincaré Embeddings for Learning Hierarchical Representations
  Maximillian Nickel · Douwe Kiela
- 2017 Spotlight: Poincaré Embeddings for Learning Hierarchical Representations
  Maximillian Nickel · Douwe Kiela