Policy Optimization with Linear Temporal Logic Constraints

Cameron Voloshin · Hoang Le · Swarat Chaudhuri · Yisong Yue

Hall J #719

Keywords: [ Learning ] [ Linear Temporal Logic ] [ Policy Optimization ] [ Reinforcement Learning ] [ LTL ] [ Constrained RL ]

[ Abstract ]
[ Paper ] [ OpenReview ]
Tue 29 Nov 9 a.m. PST — 11 a.m. PST


We study the problem of policy optimization (PO) under linear temporal logic (LTL) constraints. The language of LTL allows flexible specification of tasks that may be unnatural to encode as scalar cost functions. We consider LTL-constrained PO as a systematic framework that decouples task specification from policy selection, and as an alternative to the standard practice of cost shaping. Given access to a generative model, we develop a model-based approach with a sample-complexity analysis guaranteeing both task satisfaction and cost optimality (through a reduction to a reachability problem). Empirically, our algorithm achieves strong performance even in low-sample regimes.
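To make the reachability reduction concrete, here is a minimal toy sketch (not the paper's algorithm): for the simple LTL formula "F goal" (eventually reach the goal), the product of the MDP with the formula's automaton is trivial, so the maximal satisfaction probability is just the optimal reachability probability, computable by value iteration. The 5-state MDP, its transition kernel `P`, and the states `goal` and trap are all hypothetical, chosen for illustration.

```python
import numpy as np

# Hypothetical 5-state MDP with 2 actions. State 3 is the accepting
# ("goal") state for the formula F goal; state 4 is an absorbing trap.
n_states, n_actions = 5, 2
goal = 3

# P[s, a] is the (assumed) next-state distribution.
P = np.zeros((n_states, n_actions, n_states))
P[0, 0] = [0.0, 1.0, 0.0, 0.0, 0.0]   # safe move to s1
P[0, 1] = [0.0, 0.0, 0.0, 0.5, 0.5]   # risky shortcut: goal or trap
P[1, 0] = [0.0, 0.0, 1.0, 0.0, 0.0]
P[1, 1] = [1.0, 0.0, 0.0, 0.0, 0.0]
P[2, 0] = [0.0, 0.0, 0.0, 0.9, 0.1]   # reaches the goal w.p. 0.9
P[2, 1] = [1.0, 0.0, 0.0, 0.0, 0.0]
P[3, :, 3] = 1.0                       # goal is absorbing
P[4, :, 4] = 1.0                       # trap is absorbing

def max_reach_prob(P, goal, iters=200):
    """Optimal probability of eventually reaching `goal`, per state,
    via value iteration on the reachability objective."""
    n = P.shape[0]
    v = np.zeros(n)
    v[goal] = 1.0
    for _ in range(iters):
        q = P @ v                 # shape (n_states, n_actions)
        v = q.max(axis=1)
        v[goal] = 1.0             # accepting state stays satisfied
    return v, (P @ v).argmax(axis=1)

v, policy = max_reach_prob(P, goal)
print(np.round(v, 3))   # per-state satisfaction probability
print(policy)           # greedy (satisfaction-maximizing) action per state
```

Here the satisfying policy avoids the risky shortcut from state 0 (probability 0.5) in favor of the safer route through states 1 and 2 (probability 0.9). The paper's setting additionally optimizes a cost among policies meeting the LTL constraint; this sketch shows only the satisfaction side.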
