

Poster in Workshop: Generalization in Planning (GenPlan '23)

Quantized Local Independence Discovery for Fine-Grained Causal Dynamics Learning in Reinforcement Learning

Inwoo Hwang · Yun-hyeok Kwak · Suhyung Choi · Byoung-Tak Zhang · Sanghack Lee

Keywords: [ Reinforcement Learning ] [ Causal Reasoning ] [ Local Independence ]


Abstract:

Incorporating causal relationships between variables into dynamics learning has emerged as a promising approach to enhancing robustness and generalization in reinforcement learning (RL). Recent studies have focused on examining conditional independences and leveraging only the relevant state and action variables for prediction. However, such approaches tend to overlook local independence relationships that hold only under certain circumstances, referred to as events. In this work, we present a theoretically grounded and practical approach to dynamics learning that discovers such meaningful events and infers fine-grained causal relationships. The key idea is to use vector quantization to learn a discrete latent variable that represents an event together with the causal relationships specific to that event. As a result, our method provides a fine-grained understanding of the dynamics by capturing event-specific causal relationships, leading to improved robustness and generalization in RL. Experimental results demonstrate that our method is more robust to unseen states and generalizes better to downstream tasks than prior approaches. In addition, we find that our method successfully identifies meaningful events and recovers event-specific causal relationships.
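To make the key idea concrete, the following is a minimal PyTorch sketch of a quantized event code that selects an event-specific causal mask for dynamics prediction. It is an illustrative assumption, not the authors' implementation: the class name, network sizes, mask parameterization, and loss weights are all hypothetical, and only the general VQ-VAE machinery (nearest-code lookup, codebook/commitment losses, straight-through estimator) is standard.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class QuantizedCausalDynamics(nn.Module):
    """Sketch: discrete event codes, each decoded into a causal mask over
    input variables that gates a per-variable dynamics head."""

    def __init__(self, num_vars, num_codes=8, embed_dim=32, hidden=64):
        super().__init__()
        self.num_vars = num_vars
        # Encoder maps the concatenated (state, action) vector to a latent.
        self.encoder = nn.Sequential(
            nn.Linear(num_vars, hidden), nn.ReLU(),
            nn.Linear(hidden, embed_dim),
        )
        # Codebook of event embeddings (the vector-quantization step).
        self.codebook = nn.Embedding(num_codes, embed_dim)
        # Decodes a quantized event code into mask logits: entry (i, j)
        # scores whether input j is a causal parent of predicted variable i.
        self.mask_decoder = nn.Linear(embed_dim, num_vars * num_vars)
        # Shared dynamics head applied per predicted variable.
        self.head = nn.Sequential(
            nn.Linear(num_vars, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        z = self.encoder(x)                           # (B, D)
        dists = torch.cdist(z, self.codebook.weight)  # (B, K)
        idx = dists.argmin(dim=-1)                    # (B,) inferred event
        z_q = self.codebook(idx)                      # (B, D)
        # Standard VQ-VAE codebook and commitment losses.
        vq_loss = F.mse_loss(z_q, z.detach()) + 0.25 * F.mse_loss(z, z_q.detach())
        # Straight-through estimator: gradients bypass the argmin.
        z_q = z + (z_q - z).detach()
        # Event-specific causal mask, one row per predicted variable.
        mask = torch.sigmoid(self.mask_decoder(z_q))
        mask = mask.view(-1, self.num_vars, self.num_vars)
        gated = mask * x.unsqueeze(1)                 # (B, V, V) masked inputs
        pred = self.head(gated).squeeze(-1)           # (B, V) next-state prediction
        return pred, vq_loss, idx, mask


# Toy usage: train against a prediction loss plus the VQ terms.
model = QuantizedCausalDynamics(num_vars=6)
x = torch.randn(32, 6)                                # concatenated (state, action)
pred, vq_loss, idx, mask = model(x)
loss = F.mse_loss(pred, torch.randn(32, 6)) + vq_loss  # toy targets
loss.backward()
```

In this sketch the straight-through estimator lets the encoder receive gradients from the prediction loss despite the non-differentiable nearest-code lookup, so the codebook can come to partition inputs into the discrete events the abstract describes.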
