How can we plan efficiently in real time to control an agent in a complex environment that may involve many other agents? While existing sample-based planners have enjoyed empirical success in large POMDPs, their performance relies heavily on a fast simulator. However, real-world scenarios are inherently complex, and their simulators are often computationally demanding, which severely limits the performance of online planners. In this work, we propose influence-augmented online planning, a principled method to transform a factored simulator of the entire environment into a local simulator that samples only the state variables most relevant to the planning agent's observations and rewards, while capturing the incoming influence from the rest of the environment with machine learning methods. Our main experimental results show that planning with POMCP on this less accurate but much faster local simulator leads to higher real-time planning performance than planning on the simulator that models the entire environment.
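The core idea of the abstract can be illustrated with a toy sketch: a full simulator samples both the agent's local state and an external "influence source" variable, while a local simulator samples only the local state and draws the external variable from a learned influence predictor. Everything below is an illustrative assumption, not the paper's actual environment or model: the dynamics, variable names, and the simple frequency-count predictor (standing in for the machine-learning methods the paper uses) are invented for exposition.

```python
import random

# Toy factored environment (illustrative, not from the paper):
# the agent's local state x depends on its own action and on an
# external "influence source" variable u from the rest of the environment.

def global_step(x, u, action, rng):
    """Full simulator: samples the external variable AND the local state.
    Sampling u is what is expensive in realistic environments."""
    u_next = rng.random() < 0.3            # external dynamics
    x_next = (x + action + int(u)) % 5     # local dynamics, influenced by u
    return x_next, u_next

class InfluencePredictor:
    """Approximates P(u | local state) by simple frequency counts,
    standing in for a learned model."""
    def __init__(self):
        self.counts = {}

    def update(self, x, u):
        c = self.counts.setdefault(x, [0, 0])
        c[int(u)] += 1

    def sample(self, x, rng):
        c = self.counts.get(x)
        if c is None or sum(c) == 0:
            return rng.random() < 0.5      # uninformed prior
        return rng.random() < c[1] / sum(c)

def local_step(x, action, predictor, rng):
    """Local simulator: only local variables are simulated; the incoming
    influence u is sampled from the learned predictor instead."""
    u = predictor.sample(x, rng)
    return (x + action + int(u)) % 5
```

An online planner such as POMCP would then roll out `local_step` instead of `global_step`, trading a small loss in accuracy for much cheaper samples.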
Author Information
Jinke He (Delft University of Technology)
Miguel Suau (Delft University of Technology)
Frans Oliehoek (Delft University of Technology)
More from the Same Authors
- 2021: Offline Contextual Bandits for Wireless Network Optimization
  Miguel Suau
- 2022 Poster: Distributed Influence-Augmented Local Simulators for Parallel MARL in Large Networked Systems
  Miguel Suau · Jinke He · Mustafa Mert Çelikok · Matthijs Spaan · Frans Oliehoek
- 2020 Poster: MDP Homomorphic Networks: Group Symmetries in Reinforcement Learning
  Elise van der Pol · Daniel E Worrall · Herke van Hoof · Frans Oliehoek · Max Welling
- 2020 Poster: Multi-agent active perception with prediction rewards
  Mikko Lauri · Frans Oliehoek