Tutorial
Mon Dec 09 08:30 AM -- 10:30 AM (PST) @ West Exhibition Hall C + B3
Imitation Learning and its Application to Natural Language Generation
Kyunghyun Cho · Hal Daumé III

Imitation learning is a learning paradigm that interpolates between reinforcement learning on one extreme and supervised learning on the other. In the specific case of generating structured outputs, as in natural language generation, imitation learning allows us to train generation policies with neither the strong supervision of the detailed generation procedure that supervised learning would require nor the sparse reward signal that reinforcement learning relies on. Imitation learning accomplishes this by exploiting the availability of potentially suboptimal "experts" that provide supervision along an execution trajectory of the policy. In the first part of this tutorial, we give an overview of the imitation learning paradigm and a suite of practical imitation learning algorithms. We then consider the specific application of natural language generation, framing this problem as a sequential decision-making process. Under this view, we demonstrate how imitation learning can be successfully applied to natural language generation, opening the door to a range of possible ways to learn policies that generate natural language sentences beyond naive left-to-right autoregressive generation.
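
To make the idea of expert supervision along the learner's own trajectories concrete, here is a minimal DAgger-style sketch for left-to-right generation in Python with PyTorch. It assumes a toy setting: the "expert" simply returns the reference token at each step (a common stand-in for a dynamic oracle), and names such as Policy, expert_action, and dagger_round are illustrative, not part of the tutorial material.

import random
import torch
import torch.nn as nn

VOCAB = ["<pad>", "<bos>", "a", "b", "c"]
stoi = {t: i for i, t in enumerate(VOCAB)}

class Policy(nn.Module):
    """Tiny autoregressive policy: embed the prefix, run a GRU, score the next token."""
    def __init__(self, vocab_size, hidden=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, prefix_ids):            # prefix_ids: (1, T)
        h, _ = self.rnn(self.emb(prefix_ids))
        return self.out(h[:, -1])              # logits for the next token

def expert_action(reference, t):
    """Hypothetical (possibly suboptimal) expert: the reference token at step t."""
    return stoi[reference[t]] if t < len(reference) else stoi["<pad>"]

def dagger_round(policy, references, beta=0.5):
    """Roll out a mixture of expert and learner; collect (state, expert label) pairs."""
    dataset = []
    for ref in references:
        prefix = [stoi["<bos>"]]
        for t in range(len(ref)):
            state = torch.tensor([prefix])
            a_star = expert_action(ref, t)
            dataset.append((prefix[:], a_star))   # expert supervises every visited state
            if random.random() < beta:            # follow the expert ...
                prefix.append(a_star)
            else:                                 # ... or the learner's own prediction
                with torch.no_grad():
                    prefix.append(policy(state).argmax(-1).item())
    return dataset

def train(policy, dataset, epochs=3):
    opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        random.shuffle(dataset)
        for prefix, a_star in dataset:
            logits = policy(torch.tensor([prefix]))
            loss = loss_fn(logits, torch.tensor([a_star]))
            opt.zero_grad()
            loss.backward()
            opt.step()

if __name__ == "__main__":
    refs = ["abc", "abcc", "aabc"]
    policy = Policy(len(VOCAB))
    aggregated = []
    for rnd in range(5):                          # states are visited under the current policy
        aggregated += dagger_round(policy, refs, beta=0.5 ** rnd)
        train(policy, aggregated)

The point the sketch illustrates is the one made above: states are collected under roll-outs of the current policy (mixed with the expert via beta, which decays across rounds), yet the training labels at every visited state come from the expert rather than from a sparse end-of-sequence reward.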