
User-Interactive Offline Reinforcement Learning
Phillip Swazinna · Steffen Udluft · Thomas Runkler
Event URL: https://openreview.net/forum?id=Jx1ziIYcwo

Offline reinforcement learning algorithms are still not fully trusted by practitioners, due to the risk that the learned policy performs worse than the original policy that generated the dataset, or behaves in unexpected ways that are unfamiliar to the user. At the same time, offline RL algorithms do not allow the user to tune their arguably most important hyperparameter: the proximity of the learned policy to the original policy. We propose an algorithm that allows the user to tune this hyperparameter at runtime, thereby addressing both of the above-mentioned issues simultaneously.
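The core idea of a runtime-tunable proximity hyperparameter can be sketched as a policy conditioned on a user-chosen trade-off value. The sketch below is a minimal illustration, not the authors' method: the function names (`behavior_action`, `optimized_action`, `interactive_policy`) and the simple convex combination are assumptions chosen for clarity; in practice the trade-off parameter would condition a single learned policy network trained across the whole range of values.

```python
import numpy as np

# Hypothetical sketch of a proximity-conditioned policy. The user picks
# lam in [0, 1] at runtime: lam = 1 stays with the original (behavior)
# policy, lam = 0 uses the fully return-optimized policy, and values in
# between interpolate -- no retraining required to change lam.

def behavior_action(state):
    # Stand-in for the policy that generated the offline dataset.
    return np.tanh(state)

def optimized_action(state):
    # Stand-in for a return-maximizing policy learned offline.
    return np.clip(2.0 * state, -1.0, 1.0)

def interactive_policy(state, lam):
    # Convex combination as the simplest possible proximity control:
    # larger lam keeps the action closer to the behavior policy.
    return lam * behavior_action(state) + (1.0 - lam) * optimized_action(state)

state = np.array([0.3, -0.5])
print(interactive_policy(state, 1.0))  # matches the behavior policy
print(interactive_policy(state, 0.0))  # matches the optimized policy
print(interactive_policy(state, 0.5))  # a compromise between the two
```

A user who observes unfamiliar behavior can dial `lam` back toward 1 to recover the trusted baseline, then lower it gradually as confidence in the learned policy grows.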

Author Information

Phillip Swazinna (TU Munich & Siemens AG)
Steffen Udluft (Siemens AG)
Thomas Runkler (Technical University of Munich)
