

Poster

Regret Bounds for Risk-Sensitive Reinforcement Learning

Osbert Bastani · Jason Yecheng Ma · Estelle Shen · Wanqiao Xu

Hall J (level 1) #720

Keywords: CVaR objective, Risk-sensitive reinforcement learning


Abstract:

In safety-critical applications of reinforcement learning such as healthcare and robotics, it is often desirable to optimize risk-sensitive objectives that account for tail outcomes rather than expected reward. We prove the first regret bounds for reinforcement learning under a general class of risk-sensitive objectives, including the popular CVaR objective. Our theory is based on a novel characterization of the CVaR objective together with a new optimistic MDP construction.
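
For background, one standard formulation of the CVaR objective (shown here only as context; the paper's own characterization may differ) is the Rockafellar-Uryasev form. For a random return $R$ and risk level $\alpha \in (0, 1]$,

$$\mathrm{CVaR}_\alpha(R) \;=\; \sup_{\tau \in \mathbb{R}} \left\{ \tau - \frac{1}{\alpha}\,\mathbb{E}\big[(\tau - R)_+\big] \right\},$$

which, for continuous return distributions, equals the expected return conditioned on the lower $\alpha$-tail, $\mathbb{E}[R \mid R \le \mathrm{VaR}_\alpha(R)]$. Optimizing this quantity therefore emphasizes tail outcomes rather than the mean, recovering the standard expected-reward objective as $\alpha \to 1$.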
