Invited talk
in
Workshop: Information-Theoretic Principles in Cognitive Systems

Information-based exploration under active inference

Noor Sajid


Abstract:

Agents contend with conflicting objectives when interacting with their environment, e.g., exploratory drives when the environment is unknown, or exploitative drives to maximise some expected return. A widely studied proposition for understanding how to appropriately balance these distinct imperatives is active inference. In this talk, I will introduce active inference – a neuroscience theory – which brings together perception and action under a single objective of minimising surprisal across time. Through T-maze simulations, I will illustrate how this single objective provides a way to balance information-based exploration and exploitation. Next, I will present our work on scaling up active inference to operate in complex, continuous state-spaces. For this, we propose using multiple forms of Monte-Carlo (MC) sampling to render (expected) surprisal computationally tractable. I will construct-validate this in a complex Animal-AI environment, where our agents can simulate the future, to evince reward-directed navigation – despite a temporary suspension of visual input. Lastly, I will extend this formulation to deal appropriately with volatile environments by introducing a preference-augmented (expected) surprisal objective. Using the FrozenLake environment, I will discuss different ways of encoding preferences and how they underwrite appropriate levels of arbitration between exploitation and exploration.
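The balance the abstract describes is often expressed by scoring each policy with an expected free energy that sums an epistemic (information-gain) term and a pragmatic (preference-satisfaction) term. The sketch below is a minimal, hedged illustration for a discrete setting, not the implementation used in the talk: `A` is an assumed outcome-given-state likelihood matrix, `qs` the beliefs over hidden states under a policy, and `log_C` assumed log-preferences over outcomes.

```python
import numpy as np

def policy_value(qs, A, log_C, eps=1e-16):
    """Score a policy as epistemic value (expected information gain
    about hidden states) plus pragmatic value (expected log-preference).
    Higher is better; a softmax over these scores yields policy beliefs.

    qs    : (n_states,) beliefs over hidden states under the policy
    A     : (n_outcomes, n_states) outcome likelihood P(o|s)
    log_C : (n_outcomes,) log-preferences over outcomes
    """
    qo = A @ qs  # predicted outcome distribution Q(o)
    # Epistemic value = mutual information between outcomes and states:
    # H[Q(o)] - E_{Q(s)} H[P(o|s)]
    H_qo = -np.sum(qo * np.log(qo + eps))
    H_A = -np.sum(A * np.log(A + eps), axis=0)  # outcome entropy per state
    epistemic = H_qo - H_A @ qs
    # Pragmatic value = expected log-preference over predicted outcomes
    pragmatic = qo @ log_C
    return epistemic + pragmatic

# Toy comparison: an informative mapping (outcomes reveal the state)
# versus an ambiguous one (outcomes carry no state information).
qs = np.array([0.5, 0.5])               # maximally uncertain beliefs
log_C = np.log(np.array([0.5, 0.5]))    # flat preferences
A_informative = np.array([[0.95, 0.05],
                          [0.05, 0.95]])
A_ambiguous = np.full((2, 2), 0.5)

v_inf = policy_value(qs, A_informative, log_C)
v_amb = policy_value(qs, A_ambiguous, log_C)
```

With flat preferences the pragmatic terms match, so the informative policy wins purely on epistemic value; with sharp preferences, the pragmatic term can dominate and tilt the agent toward exploitation, which is the arbitration the abstract refers to.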