

Poster
in
Workshop: Deep Reinforcement Learning Workshop

Ensemble based uncertainty estimation with overlapping alternative predictions

Dirk Eilers · Felippe Schmoeller Roza · Karsten Roscher


Abstract:

A reinforcement learning model will predict an action in whatever state it finds itself; even when there is no distinct outcome, for instance due to unseen states, the model may not indicate that. Different methods for uncertainty estimation can be used to flag such cases. Although uncertainty estimation is a well-understood approach in AI, the overlap of effects such as alternative possible predictions (multiple feasible actions in a given state) in reinforcement learning is less clear and, to our knowledge, not well documented in the current literature. In this work we investigate uncertainty estimation in simplified scenarios in a gridworld environment. Using model-ensemble-based uncertainty estimation, we propose an algorithm based on action-count variance to deal with discrete action spaces, together with a delta to the in-distribution (ID) action variance to handle overlapping alternative predictions. To visualize the expressiveness of the method, we create heatmaps for different ID and out-of-distribution (OOD) scenarios on gridworlds and propose an indicator for uncertainty. We show that the method indicates potentially unsafe states when the agent is near unseen elements in the scenarios (OOD) and can distinguish between OOD states and overlapping alternative predictions.
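The action-count-variance idea described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the population-variance choice, and the simple baseline-delta rule are all assumptions made for demonstration. Each ensemble member votes for a discrete action; when members agree, the vote counts are concentrated and their variance is high, while disagreement (whether from OOD states or from overlapping alternative actions) spreads the votes and lowers the variance. Comparing against an ID baseline variance is what would let the two low-variance causes be separated.

```python
from collections import Counter
from statistics import pvariance


def action_count_variance(actions, n_actions):
    """Variance of per-action vote counts across an ensemble.

    Full agreement concentrates votes on one action (high variance);
    disagreement spreads votes across actions (low variance).
    `actions` is one predicted discrete action per ensemble member.
    """
    counts = Counter(actions)
    votes = [counts.get(a, 0) for a in range(n_actions)]
    return pvariance(votes)


def variance_delta_to_id(actions, n_actions, id_baseline_variance):
    """Hypothetical delta of observed count variance to an ID baseline.

    A large positive delta suggests more disagreement than is normal
    in-distribution. The exact decision rule used in the paper is not
    given in the abstract; this is an illustrative placeholder.
    """
    return id_baseline_variance - action_count_variance(actions, n_actions)


# Example: 5 ensemble members, 4 possible actions.
agree = [0, 0, 0, 0, 0]     # all members pick action 0
spread = [0, 1, 2, 3, 0]    # votes spread across actions
print(action_count_variance(agree, 4))   # high: votes concentrated
print(action_count_variance(spread, 4))  # low: votes dispersed
```

The design choice here is that counting votes (rather than averaging Q-values) keeps the measure well defined for discrete action spaces, which is the setting the abstract targets.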
