

Poster

Distributed Inverse Constrained Reinforcement Learning for Multi-agent Systems

Shicheng Liu · Minghui Zhu

Hall J (level 1) #730

Keywords: [ inverse reinforcement learning ] [ distributed bi-level optimization ]


Abstract:

This paper considers the problem of recovering the policies of multiple interacting experts by estimating their reward functions and constraints, where the experts' demonstration data are distributed across a group of learners. We formulate this problem as a distributed bi-level optimization problem and propose a novel bi-level "distributed inverse constrained reinforcement learning" (D-ICRL) algorithm: through intermittent communications, the learners collaboratively estimate the constraints in the outer loop and learn the corresponding policies and reward functions in the inner loop from the distributed demonstrations. We formally guarantee that the distributed learners asymptotically reach a consensus that belongs to the set of stationary points of the bi-level optimization problem.
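The abstract does not give the paper's update equations, so the following is only a toy sketch of the bi-level structure it describes: each learner keeps a local estimate of the shared constraint parameter, an inner-loop step stands in for fitting reward/policy parameters to local demonstrations, and an outer-loop step mixes constraint estimates with neighbors (consensus averaging) before a local gradient update. The quadratic surrogate loss, step size, and four-learner ring topology are illustrative assumptions, not the actual D-ICRL algorithm.

```python
# Toy sketch of distributed bi-level learning with consensus (NOT D-ICRL itself).
import numpy as np

rng = np.random.default_rng(0)
n, dim = 4, 3
shared_constraint = rng.normal(size=dim)

# Each learner observes a noisy local view of the shared constraint
# (a stand-in for its private demonstration data).
targets = [shared_constraint + 0.1 * rng.normal(size=dim) for _ in range(n)]

# Doubly stochastic mixing matrix for a hypothetical 4-learner ring.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

theta = [rng.normal(size=dim) for _ in range(n)]  # local constraint estimates
step = 0.1

for _ in range(300):
    # Inner loop (placeholder): each learner would fit its reward/policy
    # parameters to its own demonstrations here; we use the gradient of a
    # simple quadratic surrogate loss instead.
    grads = [th - tgt for th, tgt in zip(theta, targets)]
    # Outer loop: consensus mixing with neighbors, then a local gradient step.
    mixed = [sum(W[i, j] * theta[j] for j in range(n)) for i in range(n)]
    theta = [m - step * g for m, g in zip(mixed, grads)]

# Estimates cluster together (approximate consensus) near the average
# of the learners' local optima.
spread = max(np.linalg.norm(theta[i] - theta[0]) for i in range(1, n))
mean_err = np.linalg.norm(np.mean(theta, axis=0) - np.mean(targets, axis=0))
print(round(spread, 3), round(mean_err, 6))
```

With a constant step size this decentralized gradient scheme reaches only approximate consensus (the residual disagreement shrinks with the step size); exact asymptotic consensus, as guaranteed in the paper, typically requires diminishing step sizes or gradient-tracking corrections.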
