

Spotlight Poster

Conditional Mutual Information for Disentangled Representations in Reinforcement Learning

Mhairi Dunion · Trevor McInroe · Kevin Sebastian Luck · Josiah Hanna · Stefano Albrecht

Great Hall & Hall B1+B2 (level 1) #1404
[ Project Page ] [ Paper ] [ Poster ] [ OpenReview ]
Wed 13 Dec 8:45 a.m. PST — 10:45 a.m. PST

Abstract:

Reinforcement Learning (RL) environments can produce training data with spurious correlations between features due to limited training data or limited feature coverage. This can lead to RL agents encoding these misleading correlations in their latent representation, preventing the agent from generalising if the correlation changes within the environment or when deployed in the real world. Disentangled representations can improve robustness, but existing disentanglement techniques that minimise mutual information between features require independent features, and thus cannot disentangle correlated features. We propose an auxiliary task for RL algorithms that learns a disentangled representation of high-dimensional observations with correlated features by minimising the conditional mutual information between features in the representation. We demonstrate experimentally, using continuous control tasks, that our approach improves generalisation under correlation shifts, and that it improves the training performance of RL algorithms in the presence of correlated features.
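To make the core idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of how a conditional mutual information term between two latent feature groups could be estimated and minimised as an auxiliary loss. It uses the standard variational recipe I(z_i; z_j | c) ≈ E[log q(z_j | z_i, c) − log q(z_j | c)] with Gaussian variational conditionals; all names (`GaussianConditional`, `cmi_aux_loss`) and the choice of estimator are illustrative assumptions, and the paper's own estimator may differ.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a variational estimate of the conditional mutual
# information I(z_i; z_j | c) between two latent feature groups z_i and z_j
# given a conditioning variable c, usable as an auxiliary loss to minimise.
# This is NOT the authors' implementation, only one standard CMI estimator.

class GaussianConditional(nn.Module):
    """MLP outputting the mean and log-variance of a diagonal Gaussian."""
    def __init__(self, in_dim, out_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * out_dim),
        )

    def log_prob(self, x, target):
        # Unnormalised Gaussian log-density; the constant term cancels in
        # the log-ratio below, so it is omitted here.
        mu, log_var = self.net(x).chunk(2, dim=-1)
        return (-0.5 * (log_var + (target - mu) ** 2 / log_var.exp())).sum(-1)

def cmi_aux_loss(z_i, z_j, c, q_joint, q_marginal):
    """Sample-based CMI estimate: E[log q(z_j|z_i,c) - log q(z_j|c)]."""
    log_q_joint = q_joint.log_prob(torch.cat([z_i, c], dim=-1), z_j)
    log_q_marg = q_marginal.log_prob(c, z_j)
    return (log_q_joint - log_q_marg).mean()

# Usage with dummy batch data: two 8-dim feature groups, 4-dim conditioner.
z_i, z_j, c = torch.randn(32, 8), torch.randn(32, 8), torch.randn(32, 4)
q_joint = GaussianConditional(8 + 4, 8)
q_marginal = GaussianConditional(4, 8)
loss = cmi_aux_loss(z_i, z_j, c, q_joint, q_marginal)
loss.backward()
```

In practice such a term would be added to the RL objective with a small weight, while the variational networks `q_joint` and `q_marginal` are fit by maximum likelihood on the same batches in a separate optimiser step; conditioning on c rather than minimising plain mutual information is what allows correlated (non-independent) features to be disentangled.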
