

Poster

Symmetry-Based Disentangled Representation Learning requires Interaction with Environments

Hugo Caselles-Dupré · Michael Garcia Ortiz · David Filliat

East Exhibition Hall B + C #64

Keywords: [ Reinforcement Learning ] [ Deep Learning -> Predictive Models; Reinforcement Learning and Planning ] [ Algorithms ] [ Representation Learning ]


Abstract:

Finding a generally accepted formal definition of a disentangled representation in the context of an agent behaving in an environment is an important challenge towards the construction of data-efficient autonomous agents. Higgins et al. recently proposed Symmetry-Based Disentangled Representation Learning, a definition based on a characterization of symmetries in the environment using group theory. We build on their work and present theoretical and empirical observations that lead us to argue that Symmetry-Based Disentangled Representation Learning cannot rely on static observations alone: agents should interact with the environment to discover its symmetries. Our experiments can be reproduced in Colab and the code is available on GitHub.
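For context, a brief sketch of the definition referenced above (following Higgins et al., 2018; the notation here is illustrative, not the authors' own): let $G$ be the symmetry group of the environment, acting on the set of world states $W$, with a decomposition $G = G_1 \times \dots \times G_n$. A representation $f : W \to Z$ is symmetry-based disentangled with respect to this decomposition if (i) $G$ also acts on $Z$ via some action $\cdot : G \times Z \to Z$; (ii) $f$ is equivariant, i.e. $f(g \cdot w) = g \cdot f(w)$ for all $g \in G$ and $w \in W$; and (iii) $Z$ factors as $Z = Z_1 \times \dots \times Z_n$, where each $Z_i$ is affected only by the corresponding subgroup $G_i$ and is invariant to the action of the others. The paper's argument is that symmetries characterized this way manifest in state transitions, so they cannot be identified from static observations alone.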
