Learning a new task often requires exploration: gathering data to learn about the environment and how to solve the task. But how do we explore efficiently, and how can an agent make the best use of the prior knowledge it has about the environment? Meta-reinforcement learning allows us to learn inductive biases for exploration from data, which is crucial for enabling agents to rapidly pick up new tasks. In the first part of this talk, I look at the different meta-learning problem settings that exist in the literature, and at the type of exploratory behaviour each of them requires. This generally depends on how much time the agent has to interact with the environment before its performance is evaluated. In the second part of the talk, we take a step back and consider how to meta-learn exploration strategies in the first place, which might require a different type of exploration during meta-learning itself. Throughout the talk, I focus on the "online adaptation" setting, where the agent has to perform well from the very first time step in a new environment. In this setting, the agent has to trade off exploration and exploitation very carefully, since every action counts towards its final performance.
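To make the evaluation criterion concrete, here is a minimal sketch of how the online adaptation setting scores an agent: reward is accumulated from the very first step in each new task, so exploratory actions are penalised just like any others. The names (sample_task, agent.act, agent.reset_memory) are hypothetical placeholders for illustration, not code from the talk.

    import numpy as np

    def evaluate_online_adaptation(agent, sample_task, horizon=100, n_tasks=20):
        """Average return per task, accumulated from time step 0."""
        returns = []
        for _ in range(n_tasks):
            env = sample_task()          # draw a new, previously unseen task
            obs = env.reset()
            agent.reset_memory()         # fresh adaptation state (e.g. RNN hidden state)
            total = 0.0
            for t in range(horizon):
                action = agent.act(obs)  # must trade off exploring vs. exploiting
                obs, reward, done, _ = env.step(action)
                total += reward          # exploratory steps count towards the score too
                if done:
                    break
            returns.append(total)
        return float(np.mean(returns))

Under this protocol there is no free "practice" phase before evaluation begins, which is what forces the careful exploration-exploitation trade-off described above.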
Author Information
Luisa Zintgraf (Vrije Universiteit Brussel (VUB))
More from the Same Authors
- 2021 : On the Practical Consistency of Meta-Reinforcement Learning Algorithms »
  Zheng Xiong · Luisa Zintgraf · Jacob Beck · Risto Vuorio · Shimon Whiteson
- 2021 : Generalized Belief Learning in Multi-Agent Settings »
  Darius Muglich · Luisa Zintgraf · Christian Schroeder de Witt · Shimon Whiteson · Jakob Foerster
- 2021 : Invited Talk #6: Luisa Zintgraf »
  Luisa Zintgraf
- 2020 : Q/A for invited talk #2 »
  Luisa Zintgraf