Natural language understanding in grounded interactive scenarios is tightly coupled with the actions the system generates. The action space determines much of the problem's complexity and the type of reasoning required. In this talk, I will describe our approach to learning to map instructions and observations to continuous control of a realistic quadcopter drone. This scenario raises challenging new questions, including: How can we use demonstrations to learn to bridge the gap between the high-level concepts of language and low-level robot controls? And how do we design models that continuously observe, control, and react to a rapidly changing environment? This work uses a new publicly available evaluation benchmark.
Author Information
Yoav Artzi (Cornell University)
More from the Same Authors
- 2022: $\ell$Gym: Natural Language Visual Reasoning with Reinforcement Learning
  Anne Wu · Kianté Brantley · Noriyuki Kojima · Yoav Artzi
- 2022 Workshop: InterNLP: Workshop on Interactive Learning for Natural Language Processing
  Kianté Brantley · Soham Dan · Ji Ung Lee · Khanh Nguyen · Edwin Simpson · Alane Suhr · Yoav Artzi