Model based multi-agent reinforcement learning with tensor decompositions
Pascal van der Vaart · Anuj Mahajan · Shimon Whiteson

A challenge in multi-agent reinforcement learning is generalising over intractably large state-action spaces. This work achieves generalisation over unexplored state-action pairs by modelling the transition and reward functions as tensors of low CP-rank. Initial experiments show that using tensor decompositions in a model-based reinforcement learning algorithm can lead to much faster convergence if the true transition and reward functions are indeed of low rank.
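The core idea can be illustrated with a small sketch (this is an illustration of CP decomposition in general, not the authors' implementation; all sizes and variable names here are hypothetical). A transition tensor T[s, a, s'] of CP-rank R is represented by three factor matrices, so the model stores O((2|S| + |A|)R) parameters instead of O(|S|²|A|):

```python
import numpy as np

# Hypothetical sizes: |S| states, |A| joint actions, CP rank R.
S, A, R = 6, 4, 2

rng = np.random.default_rng(0)
# CP factors: the transition tensor is modelled as a sum of R outer
# products, T[s, a, s'] = sum_r U[s, r] * V[a, r] * W[s', r].
U = rng.random((S, R))
V = rng.random((A, R))
W = rng.random((S, R))

# Reconstruct the full tensor from its CP factors via einsum.
T = np.einsum('ir,jr,kr->ijk', U, V, W)

# Normalise over next states so each T[s, a, :] is a distribution.
T /= T.sum(axis=2, keepdims=True)

# The factored model stores (S + A + S) * R numbers instead of S * A * S.
print((S + A + S) * R, "parameters vs", S * A * S)
```

Because every entry of the tensor is determined by the shared factors, updating the factors from observed transitions also changes the model's predictions at unvisited state-action pairs, which is the source of the generalisation the abstract describes.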

Author Information

Pascal van der Vaart (TU Delft)
Anuj Mahajan (University of Oxford)

Anuj is a PhD student in machine learning at the University of Oxford. His research focuses on using deep learning, probabilistic inference, and optimisation methods for single- and multi-agent reinforcement learning. Anuj completed his undergraduate degree in Computer Science at the Indian Institute of Technology, Delhi. His PhD is funded by the Google DeepMind Scholarship and the Drapers Scholarship.

Shimon Whiteson (University of Oxford)
