Poster
Pre-training via Paraphrasing
Mike Lewis · Marjan Ghazvininejad · Gargi Ghosh · Armen Aghajanyan · Sida Wang · Luke Zettlemoyer

Mon Dec 07 09:00 PM -- 11:00 PM (PST) @ Poster Session 0 #62

We introduce MARGE, a pre-trained sequence-to-sequence model learned with an unsupervised multi-lingual multi-document paraphrasing objective. MARGE provides an alternative to the dominant masked language modeling paradigm, where we self-supervise the reconstruction of target text by retrieving a set of related texts (in many languages) and conditioning on them to maximize the likelihood of generating the original. We show it is possible to jointly learn to do retrieval and reconstruction, given only a random initialization. The objective noisily captures aspects of paraphrase, translation, multi-document summarization, and information retrieval, allowing for strong zero-shot performance on several tasks. For example, with no additional task-specific training we achieve BLEU scores of up to 35.8 for document translation. We further show that fine-tuning gives strong performance on a range of discriminative and generative tasks in many languages, making MARGE the most generally applicable pre-training method to date.
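
The following is a minimal, illustrative sketch (not the released MARGE architecture) of how a joint retrieval-and-reconstruction objective of this kind can be trained end to end: relevance scores between the target and candidate evidence documents receive gradient only through the reconstruction likelihood, so retrieval and generation are learned together from a random initialization. All module names, sizes, and the particular relevance-weighting scheme below are assumptions made for the example.

    # Illustrative sketch only; module names, sizes, and the way relevance
    # weights the evidence are assumptions, not the authors' implementation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RetrieveAndReconstruct(nn.Module):
        def __init__(self, vocab_size=1000, d_model=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            # Shared encoder used both to score relevance and to encode evidence.
            enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
            dec_layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
            self.decoder = nn.TransformerDecoder(dec_layer, num_layers=2)
            self.out = nn.Linear(d_model, vocab_size)

        def doc_embedding(self, tokens):
            # Mean-pool encoder states into a single document vector.
            return self.encoder(self.embed(tokens)).mean(dim=1)

        def forward(self, target, evidence):
            # target:   (tgt_len,) token ids of the text to reconstruct
            # evidence: (num_docs, src_len) token ids of retrieved related texts
            tgt_vec = self.doc_embedding(target.unsqueeze(0))           # (1, d)
            ev_vec = self.doc_embedding(evidence)                       # (num_docs, d)
            relevance = F.softmax(ev_vec @ tgt_vec.squeeze(0), dim=0)   # (num_docs,)

            # Encode each evidence document and scale its states by relevance,
            # so more relevant documents contribute more to reconstruction and
            # the relevance scores are trained through this path alone.
            ev_states = self.encoder(self.embed(evidence))              # (num_docs, src_len, d)
            ev_states = ev_states * relevance.view(-1, 1, 1)
            memory = ev_states.reshape(1, -1, self.embed.embedding_dim)

            # Teacher-forced reconstruction of the original text from the evidence.
            dec_in = self.embed(target[:-1].unsqueeze(0))
            causal = nn.Transformer.generate_square_subsequent_mask(dec_in.size(1))
            logits = self.out(self.decoder(dec_in, memory, tgt_mask=causal))
            return F.cross_entropy(logits.squeeze(0), target[1:])

    model = RetrieveAndReconstruct()
    target = torch.randint(0, 1000, (12,))       # toy "original" document
    evidence = torch.randint(0, 1000, (4, 16))   # toy retrieved related documents
    loss = model(target, evidence)               # negative reconstruction likelihood
    loss.backward()

In this toy setup, no separate retrieval supervision exists: the only signal pushing the document embeddings toward useful relevance scores is that better-weighted evidence makes the original text easier to reconstruct, which mirrors the joint retrieval-and-reconstruction idea described in the abstract.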

Author Information

Mike Lewis (Facebook AI Research)
Marjan Ghazvininejad (Facebook AI Research)
Gargi Ghosh (Facebook)
Armen Aghajanyan (Facebook)
Sida Wang (Facebook AI Research)
Luke Zettlemoyer (University of Washington and Facebook)
