

Poster

e-COP: Episodic Constrained Optimization of Policies

Akhil Agnihotri · Rahul Jain · Deepak Ramachandran · Sahil Singla

West Ballroom A-D #6309
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

In this paper, we present the e-COP algorithm, the first policy optimization algorithm for constrained Reinforcement Learning (RL) in episodic (finite horizon) settings. Such formulations are applicable when there are separate sets of optimization criteria and constraints on a system's behavior. We approach this problem by first establishing a policy difference lemma for the episodic setting, which provides the theoretical foundation for the algorithm. We then combine established and novel solution ideas to obtain e-COP, an algorithm that is easy to implement and numerically stable, and we provide a theoretical guarantee on optimality under certain scaling assumptions. Through extensive empirical analysis on benchmarks from the Safety Gym suite, we show that our algorithm performs on par with or better than state-of-the-art (non-episodic) algorithms adapted to the episodic setting. The scalability of the algorithm opens the door to its application in safety-constrained Reinforcement Learning from Human Feedback for Large Language or Diffusion Models.
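For context, a generic episodic constrained RL problem of the kind the abstract describes can be written as follows. This is a standard illustrative formulation, not necessarily the paper's exact notation; the horizon H, reward r, cost c, and constraint budget d are assumed symbols.

$$
\max_{\pi} \; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{H-1} r(s_t, a_t)\right]
\quad \text{subject to} \quad
\mathbb{E}_{\pi}\!\left[\sum_{t=0}^{H-1} c(s_t, a_t)\right] \le d,
$$

where the expectations are over length-H trajectories generated by the policy π: one criterion (the cumulative reward) is optimized while a separate criterion (the cumulative cost) is constrained.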
