Workshop: Deep Reinforcement Learning Workshop

Deconfounded Imitation Learning

Risto Vuorio · Pim de Haan · Johann Brehmer · Hanno Ackermann · Daniel Dijkman · Taco Cohen


Standard imitation learning can fail when the expert demonstrators have different sensory inputs than the imitating agent. This partial observability gives rise to hidden confounders in the causal graph, which cause imitation to fail. We break down the space of confounded imitation learning problems and identify three settings with different data requirements in which the correct imitation policy can be identified. We then introduce an algorithm for deconfounded imitation learning, which trains an inference model jointly with a latent-conditional policy. At test time, the agent alternates between updating its belief over the latent and acting under that belief. We show in theory and practice that this algorithm converges to the correct interventional policy, solves the confounding issue, and, under certain assumptions, achieves asymptotically optimal imitation performance.
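The test-time loop described in the abstract — alternating between a belief update over the latent confounder and acting with a latent-conditional policy — can be illustrated with a minimal toy sketch. Everything here (the observation likelihoods, the per-latent policy, the Bayes update) is a hypothetical stand-in for the paper's learned inference model and policy, not the authors' implementation:

```python
import numpy as np

# Toy sketch: two possible latent confounder values, three observation
# symbols, two actions. All numbers are illustrative assumptions.

# Hypothetical per-latent observation likelihoods p(obs | z).
obs_lik = np.array([[0.7, 0.2, 0.1],   # z = 0
                    [0.1, 0.2, 0.7]])  # z = 1

# Hypothetical latent-conditional policy: action probabilities per latent.
policy = np.array([[0.9, 0.1],   # policy under z = 0
                   [0.2, 0.8]])  # policy under z = 1

belief = np.full(2, 0.5)  # uniform prior over the latent


def update_belief(belief, obs):
    """Bayes update of the belief given a new observation."""
    post = belief * obs_lik[:, obs]
    return post / post.sum()


def act(belief):
    """Marginalise the latent-conditional policy under the current belief."""
    action_probs = belief @ policy
    return int(np.argmax(action_probs))


# A fixed observation sequence, mostly consistent with latent z = 1.
for obs in [2, 2, 1, 2, 0, 2]:
    belief = update_belief(belief, obs)
    action = act(belief)

print(belief.argmax(), action)  # belief concentrates on z = 1; acts accordingly
```

The key point the sketch mirrors is that the agent never commits to a single latent value up front: each observation sharpens the belief, and the executed action is always the belief-weighted policy.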
