
Workshop: Optimal Transport and Machine Learning

Understanding Reward Ambiguity Through Optimal Transport Theory in Inverse Reinforcement Learning

Ali Baheri


In inverse reinforcement learning (IRL), the central objective is to infer underlying reward functions from observed expert behaviors in a way that not only explains the given data but also generalizes to unseen scenarios, ensuring robustness against reward ambiguity, the situation in which multiple reward functions explain the same expert behaviors equally well. While significant strides have been made in addressing this issue, current methods often struggle in high-dimensional problems and lack a geometric foundation. This paper harnesses optimal transport (OT) theory to provide a fresh perspective on these challenges. By utilizing the Wasserstein distance from OT, we establish a geometric framework that quantifies reward ambiguity and identifies a central representation, or centroid, of reward functions. These insights pave the way for robust IRL methodologies anchored in geometric interpretations, offering a structured approach to tackling reward ambiguity in high-dimensional settings.
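The two operations the abstract describes, quantifying ambiguity via the Wasserstein distance and extracting a centroid of reward functions, can be sketched in the simplest setting: candidate reward functions represented as 1-D empirical distributions of reward values over a fixed set of sampled states. Everything below (the function names, the toy reward samples, the mean-pairwise-distance ambiguity summary) is an illustrative assumption, not the paper's actual method.

```python
import numpy as np

def wasserstein2_1d(a, b):
    """W2 distance between two equal-size 1-D empirical distributions.

    For 1-D distributions with uniform weights, the optimal transport
    plan matches sorted samples, so W2 reduces to the RMS difference
    of the sorted arrays.
    """
    a, b = np.sort(np.asarray(a, float)), np.sort(np.asarray(b, float))
    return float(np.sqrt(np.mean((a - b) ** 2)))

def wasserstein2_barycenter_1d(samples):
    """W2 barycenter of equal-size 1-D empirical distributions:
    the pointwise average of their sorted samples (quantile averaging)."""
    return np.mean([np.sort(np.asarray(s, float)) for s in samples], axis=0)

# Hypothetical reward samples from three IRL solutions that all explain
# the same expert data (reward ambiguity): each array holds the rewards
# a candidate assigns to the same four sampled states.
r1 = np.array([0.0, 1.0, 2.0, 3.0])
r2 = np.array([0.5, 1.5, 2.5, 3.5])
r3 = np.array([1.0, 2.0, 3.0, 4.0])

# One way to summarize ambiguity: the mean pairwise W2 distance.
pairs = [(r1, r2), (r1, r3), (r2, r3)]
ambiguity = np.mean([wasserstein2_1d(x, y) for x, y in pairs])

# A central representation: the Wasserstein barycenter of the candidates.
center = wasserstein2_barycenter_1d([r1, r2, r3])
print(ambiguity)  # → 0.666... (mean of W2 = 0.5, 1.0, 0.5)
print(center)     # → [0.5 1.5 2.5 3.5]
```

In higher dimensions the sorting trick no longer applies; distances and barycenters would instead come from a general OT solver, but the geometric picture (ambiguity as spread in Wasserstein space, the centroid as its barycenter) is the same.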
