

Poster

RA-PbRL: Provably Efficient Risk-Aware Preference-Based Reinforcement Learning

Yujie Zhao · Jose Aguilar Escamilla · Weyl Lu · Huazheng Wang

West Ballroom A-D #6605
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Preference-based Reinforcement Learning (PbRL) studies the setting where, in each episode, the agent receives only a preference over a pair of trajectories rather than numerical rewards. Traditional approaches in this field have predominantly focused on the mean reward or utility criterion. However, PbRL scenarios that demand heightened risk awareness, such as AI systems, healthcare, and agriculture, require risk-aware measures, and traditional risk-aware objectives and algorithms are not applicable in such one-episode-reward settings. To address this, we explore and prove the applicability of two risk-aware objectives to PbRL: nested and static quantile risk objectives. We also introduce Risk-Aware-PbRL (RA-PbRL), an algorithm designed to optimize both the nested and the static objective. Additionally, we provide a theoretical analysis of the regret upper bounds, showing that they are sublinear in the number of episodes, and present empirical results supporting our findings. Our code is available at https://github.com/aguilarjose11/PbRLNeurips.
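
For intuition, a static quantile risk objective evaluates a policy by a quantile-based functional of its return distribution (for example, CVaR) rather than by the mean return. The sketch below is not the authors' implementation; it only illustrates estimating a lower-tail CVaR from sampled episode returns, with the function name static_cvar and the level alpha chosen purely for illustration.

# Minimal sketch (illustrative, not the paper's code): estimate a static
# quantile-based risk measure, lower-tail CVaR at level alpha, from
# sampled episode returns.
import numpy as np

def static_cvar(returns, alpha=0.25):
    """Average of the worst alpha-fraction of episode returns."""
    returns = np.asarray(returns, dtype=float)
    var = np.quantile(returns, alpha)      # value-at-risk threshold at level alpha
    tail = returns[returns <= var]         # worst-case tail of the return distribution
    return tail.mean()

# Example: risk-averse evaluation of a policy from 1000 simulated episode returns.
rng = np.random.default_rng(0)
episode_returns = rng.normal(loc=10.0, scale=3.0, size=1000)
print(static_cvar(episode_returns, alpha=0.25))

A risk-neutral criterion would compare policies by the mean of episode_returns; the quantile-based measure instead penalizes policies whose worst episodes are poor, which is the kind of objective RA-PbRL is designed to optimize.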
