Poster
Object-Aware Regularization for Addressing Causal Confusion in Imitation Learning
Jongjin Park · Younggyo Seo · Chang Liu · Li Zhao · Tao Qin · Jinwoo Shin · Tie-Yan Liu

Wed Dec 08 12:30 AM -- 02:00 AM (PST)

Behavioral cloning has proven to be effective for learning sequential decision-making policies from expert demonstrations. However, behavioral cloning often suffers from the causal confusion problem, where a policy relies on the noticeable effects of expert actions because of their strong correlation with those actions, rather than on the causes we actually desire. This paper presents Object-aware REgularizatiOn (OREO), a simple technique that regularizes an imitation policy in an object-aware manner. Our main idea is to encourage a policy to uniformly attend to all semantic objects, in order to prevent the policy from exploiting nuisance variables strongly correlated with expert actions. To this end, we introduce a two-stage approach: (a) we extract semantic objects from images by utilizing discrete codes from a vector-quantized variational autoencoder, and (b) we randomly drop the units that share the same discrete code together, i.e., masking out semantic objects. Our experiments demonstrate that OREO significantly improves the performance of behavioral cloning, outperforming various other regularization and causality-based methods on a variety of Atari environments and a self-driving CARLA environment. We also show that our method even outperforms inverse reinforcement learning methods trained with a considerable amount of environment interaction.
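The object-aware masking in step (b) can be sketched as a code-level dropout: instead of zeroing individual units independently, all spatial positions that share the same VQ-VAE discrete code are dropped together. The snippet below is a minimal illustration under assumed shapes (a `(C, H, W)` feature map and an `(H, W)` code map); the function name `oreo_mask` and the exact scaling convention are our own assumptions, not the authors' released implementation.

```python
import numpy as np

def oreo_mask(features, code_map, drop_prob=0.5, rng=None):
    """Object-aware dropout: drop every spatial unit sharing a VQ code.

    features: (C, H, W) feature map from an encoder
    code_map: (H, W) integer discrete codes from a VQ-VAE encoder
    drop_prob: probability of masking out each semantic object (code)
    """
    rng = np.random.default_rng() if rng is None else rng
    codes = np.unique(code_map)
    # Sample which discrete codes (i.e., semantic objects) to drop together.
    dropped = codes[rng.random(codes.shape[0]) < drop_prob]
    keep = ~np.isin(code_map, dropped)            # (H, W) keep-mask
    scale = 1.0 / (1.0 - drop_prob)               # inverted-dropout rescaling
    return features * keep[None, :, :] * scale
```

Because the mask is shared across all units belonging to one code, the policy cannot recover a masked object's information from correlated neighboring units, which is the intended regularization effect.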

Author Information

Jongjin Park (KAIST)
Younggyo Seo (KAIST)
Chang Liu (Microsoft Research Asia)
Li Zhao (Microsoft Research)
Tao Qin (Microsoft Research)
Jinwoo Shin (KAIST)
Tie-Yan Liu (Microsoft Research)