

Contributed Talk - Lightning
in
Workshop: Symmetry and Geometry in Neural Representations (NeurReps)

Homomorphism AutoEncoder --- Learning Group Structured Representations from Observed Transitions

Hamza Keurti · Hsiao-Ru Pan · Michel Besserve · Benjamin F. Grewe · Bernhard Schölkopf

Keywords: [ Homomorphism ] [ Group Representation ] [ Equivariant Representation ] [ Representation Learning ] [ Autoencoder ] [ Disentanglement ] [ Unsupervised Learning ] [ Interaction-Based Learning ] [ Sensorimotor ]


Abstract:

It is crucial for agents, both biological and artificial, to acquire world models that veridically represent the external world and how it is modified by the agent's own actions. We consider the case where such modifications can be modelled as transformations drawn from a group of symmetries structuring the world's state space. We use tools from representation learning and group theory to learn latent representations that account for both the sensory information and the actions that alter it during interactions. We introduce the Homomorphism AutoEncoder (HAE), an autoencoder equipped with a learned group representation that acts linearly on its latent space. Training on 2-step transitions implicitly enforces the group homomorphism property on the action representation. Compared to existing work, our approach makes fewer assumptions on the group representation and on which transformations the agent can sample. We motivate the method theoretically, and demonstrate empirically that it learns the correct representation of the group and the topology of the environment. We also compare its trajectory-prediction performance with previous methods.
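The 2-step training objective described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes flat observation vectors, a small discrete action set, and linear encoder/decoder maps for simplicity (the actual HAE uses neural networks); all names (`W_enc`, `rho`, `two_step_loss`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, obs_dim, n_actions = 2, 8, 3

# Illustrative linear encoder/decoder weights, plus one learned matrix
# rho(g) per discrete action: the group representation acting linearly
# on the latent space (initialized here to the identity).
W_enc = rng.normal(size=(latent_dim, obs_dim))
W_dec = rng.normal(size=(obs_dim, latent_dim))
rho = np.stack([np.eye(latent_dim) for _ in range(n_actions)])

def two_step_loss(o0, g1, o1, g2, o2):
    """Reconstruction loss over one 2-step transition (o0, g1, o1, g2, o2).

    Decoding rho(g2) @ rho(g1) @ h(o0) against the observation o2
    implicitly enforces the homomorphism property
    rho(g2 * g1) = rho(g2) rho(g1) on the visited latents.
    """
    z0 = W_enc @ o0            # encode initial observation
    z1 = rho[g1] @ z0          # apply action representation rho(g1)
    z2 = rho[g2] @ z1          # apply rho(g2): a 2-step latent rollout
    return (np.sum((W_dec @ z0 - o0) ** 2)
            + np.sum((W_dec @ z1 - o1) ** 2)
            + np.sum((W_dec @ z2 - o2) ** 2))

# Usage: evaluate the loss on a random transition tuple.
o0, o1, o2 = (rng.normal(size=obs_dim) for _ in range(3))
loss = two_step_loss(o0, 0, o1, 1, o2)
```

In the full method this loss would be minimized jointly over the encoder, decoder, and the matrices `rho[g]`, so that the action representation is shaped by the observed transitions rather than specified in advance.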
