Poster

Active Exploration for Learning Symbolic Representations

Garrett Andersen · George Konidaris

Pacific Ballroom #7

Keywords: [ Reinforcement Learning ] [ Active Learning ]


Abstract:

We introduce an online active exploration algorithm for data-efficiently learning an abstract symbolic model of an environment. Our algorithm has two parts: the first quickly builds an intermediate Bayesian symbolic model from the data the agent has collected so far; the second uses that model to direct future exploration toward regions of the state space the model is uncertain about. We show that our algorithm outperforms random and greedy exploration policies on two different computer game domains. The first is an Asteroids-inspired game with complex dynamics but simple logical structure; the second is the Treasure Game, with simpler dynamics but more complex logical structure.
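The uncertainty-guided exploration idea can be illustrated with a toy sketch. This is not the paper's algorithm (which maintains a Bayesian model over symbolic options and their effects); it is a minimal, assumed Beta-Bernoulli analogue in which the agent tracks a posterior over each option's success probability and repeatedly executes the option whose outcome it is most uncertain about:

```python
def posterior_variance(a, b):
    """Variance of a Beta(a, b) posterior over an option's success probability."""
    n = a + b
    return a * b / (n * n * (n + 1))

class ActiveExplorer:
    """Picks the option whose outcome the current model is most uncertain about.

    A stand-in for the paper's exploration step: here 'uncertainty' is the
    posterior variance of a per-option Bernoulli success probability.
    """
    def __init__(self, options):
        self.options = options
        # Beta(1, 1) (uniform) prior on each option's success probability.
        self.counts = {o: [1, 1] for o in options}

    def choose(self):
        # Greedy active exploration: maximize posterior variance.
        return max(self.options, key=lambda o: posterior_variance(*self.counts[o]))

    def update(self, option, success):
        # Bayesian update: increment the success or failure pseudo-count.
        self.counts[option][0 if success else 1] += 1

# Hypothetical option names for illustration only.
explorer = ActiveExplorer(["move", "jump", "shoot"])
option = explorer.choose()
explorer.update(option, success=True)
```

As data accumulates for an option, its posterior variance shrinks and the explorer shifts attention to options it knows less about, which is the qualitative behavior the abstract describes.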
