

Meet in the Middle: A New Pre-training Paradigm

Anh Nguyen · Nikos Karampatziakis · Weizhu Chen

Great Hall & Hall B1+B2 (level 1) #439
Tue 12 Dec 3:15 p.m. PST — 5:15 p.m. PST


Most language models (LMs) are trained and applied in an autoregressive left-to-right fashion, predicting the next token from the preceding ones. However, this ignores that the full sequence is available during training. In this paper, we introduce ``Meet in the Middle'' (MIM), a new pre-training paradigm that improves data efficiency by training in two directions, left-to-right and right-to-left, and encouraging the respective models to agree on their token distribution for each position. While the primary outcome is an improved left-to-right LM, we also obtain secondary benefits in the infilling task. There, we leverage the two pre-trained directions to propose an infilling procedure that builds the completion simultaneously from both sides. We conduct extensive experiments on both programming and natural languages and show that MIM significantly surpasses existing pre-training paradigms, in both left-to-right generation as well as infilling. Code and models available at
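The agreement term described above can be illustrated with a small sketch. The abstract does not specify the exact form of the regularizer, so this example assumes a symmetric KL divergence between the forward and backward models' predictive distributions at a single position; the function names and the toy distributions are hypothetical, chosen only to make the idea concrete.

```python
import math

def kl(p, q):
    """KL divergence between two discrete distributions (lists of probabilities)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def agreement_loss(p_fwd, p_bwd):
    """Symmetric KL between the left-to-right and right-to-left models'
    token distributions at one position. This is an assumed form of the
    agreement objective, not necessarily the paper's exact formulation."""
    return 0.5 * (kl(p_fwd, p_bwd) + kl(p_bwd, p_fwd))

# Toy example: distributions over a 3-token vocabulary at one position.
p_fwd = [0.7, 0.2, 0.1]  # left-to-right model's prediction
p_bwd = [0.6, 0.3, 0.1]  # right-to-left model's prediction

loss = agreement_loss(p_fwd, p_bwd)  # small positive value; 0 iff they agree
```

During pre-training, a term like this would be added to the two models' standard next-token (and previous-token) cross-entropy losses, pushing both directions toward a shared per-position distribution.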
