Poster in Workshop: Causal Representation Learning

Expediting Reinforcement Learning by Incorporating Temporal Causal Information

Jan Corazza · Daniel Neider · Zhe Xu · Hadi Partovi Aria

Keywords: [ probabilistic reward machines ] [ formal methods ] [ reinforcement learning ] [ temporal causality ]


Abstract:

Reinforcement learning (RL) algorithms struggle to learn optimal policies for tasks where reward feedback is sparse and depends on a complex sequence of events in the environment. Probabilistic reward machines (PRMs) are finite-state formalisms that can capture temporal dependencies in the reward signal, along with nondeterministic task outcomes. While specialized RL algorithms can exploit this finite-state structure to expedite learning, PRMs remain difficult to design and modify by hand. This hinders the already difficult tasks of utilizing high-level causal knowledge about the environment and of transferring the reward formalism to a new domain with a different causal structure. This paper proposes a novel method for incorporating causal information, in the form of Temporal Logic-based Causal Diagrams, into the reward formalism, thereby expediting policy learning and aiding the transfer of task specifications to new environments.
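To make the PRM concept concrete, below is a minimal, illustrative sketch of a probabilistic reward machine as a plain Python class. This is not the authors' implementation; the class name, the transition encoding, and the example task (reach region A, then attempt a pickup that succeeds with probability 0.9) are all hypothetical, chosen only to show how a finite-state machine can couple temporal reward dependencies with nondeterministic outcomes.

```python
import random


class ProbabilisticRewardMachine:
    """Minimal sketch of a probabilistic reward machine (PRM).

    A PRM is a finite-state machine whose transitions fire on high-level
    event labels observed in the environment. Each (state, label) pair maps
    to a probability distribution over (next_state, reward) outcomes, which
    lets the machine express both temporal reward structure and
    nondeterministic task outcomes.
    """

    def __init__(self, initial_state, transitions):
        # transitions: dict mapping (state, label) -> list of
        # (probability, next_state, reward) triples whose probabilities sum to 1
        self.initial_state = initial_state
        self.transitions = transitions
        self.state = initial_state

    def reset(self):
        self.state = self.initial_state

    def step(self, label):
        """Advance the machine on an observed event label; return the reward."""
        outcomes = self.transitions.get((self.state, label))
        if outcomes is None:
            return 0.0  # label is irrelevant in this state; stay put
        r = random.random()
        cumulative = 0.0
        for prob, next_state, reward in outcomes:
            cumulative += prob
            if r <= cumulative:
                self.state = next_state
                return reward
        # guard against floating-point rounding in the cumulative sum
        _, self.state, reward = outcomes[-1]
        return reward


# Hypothetical sparse-reward task: first reach region A, then attempt a
# pickup that succeeds with probability 0.9; reward arrives only on success.
prm = ProbabilisticRewardMachine(
    initial_state="u0",
    transitions={
        ("u0", "at_A"): [(1.0, "u1", 0.0)],
        ("u1", "pickup"): [(0.9, "u_done", 1.0), (0.1, "u0", 0.0)],
    },
)

prm.reset()
print(prm.step("at_A"))    # 0.0; the machine advances to u1
print(prm.step("pickup"))  # 1.0 with probability 0.9, else 0.0 and back to u0
```

The example illustrates why such tasks are hard for standard RL: the environment's immediate observations alone do not determine the reward, since the same "pickup" event yields reward only after "at_A" has occurred, and even then only probabilistically. Algorithms that track the machine state alongside the environment state can exploit this structure, which is the finite-state advantage the abstract refers to.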
