
Interpretable and Explainable Logical Policies via Neurally Guided Symbolic Abstraction
Quentin Delfosse · Hikaru Shindo · Devendra Dhami · Kristian Kersting

Wed Dec 13 08:45 AM -- 10:45 AM (PST) @ Great Hall & Hall B1+B2 #1507

The limited priors required by neural networks make them the dominant choice for encoding and learning policies via reinforcement learning (RL). However, they are also black boxes, making it hard to understand an agent's behavior, especially when it operates at the image level. Neuro-symbolic RL therefore aims at creating policies that are interpretable in the first place. Unfortunately, interpretability is not explainability. To achieve both, we introduce Neurally gUided Differentiable loGic policiEs (NUDGE). NUDGE exploits trained neural network-based agents to guide the search over candidate weighted logic rules, then uses differentiable logic to train the logic agents. Our experimental evaluation demonstrates that NUDGE agents can induce interpretable and explainable policies while outperforming purely neural ones and generalizing well to environments with different initial states and problem sizes.
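The two-stage pipeline described above, using a neural teacher to prune candidate rules and then weighting the survivors differentiably, can be illustrated with a toy sketch. This is not the authors' implementation; all state predicates, rule names, and the simple agreement-count guidance below are illustrative assumptions:

```python
import math

def softmax(xs):
    """Numerically stable softmax, used to turn rule weights into a vote."""
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

# Toy symbolic states (ground predicates) and a stand-in "trained" neural
# policy that the logic agent distills: move away from the enemy.
STATES = [{"enemy_left": True}, {"enemy_left": False},
          {"enemy_left": True}, {"enemy_left": False}]

def neural_policy(state):
    return "right" if state["enemy_left"] else "left"

# Candidate logic rules: (name, body condition, head action).
CANDIDATES = [
    ("flee_right",  lambda s: s["enemy_left"],     "right"),
    ("flee_left",   lambda s: not s["enemy_left"], "left"),
    ("always_left", lambda s: True,                "left"),
]

def guide(candidates, states, top_k=2):
    """Neural guidance (sketch): keep the candidate rules whose firings
    most often agree with the neural teacher's action."""
    scored = []
    for name, cond, action in candidates:
        agree = sum(1 for s in states
                    if cond(s) and neural_policy(s) == action)
        scored.append((agree, (name, cond, action)))
    scored.sort(key=lambda t: -t[0])
    return [rule for _, rule in scored[:top_k]]

def logic_policy(state, rules, weights):
    """Weighted vote of the firing rules. In a differentiable-logic
    setting these weights would be trained by gradient descent."""
    probs = softmax(weights)
    votes = {}
    for w, (name, cond, action) in zip(probs, rules):
        if cond(state):
            votes[action] = votes.get(action, 0.0) + w
    return max(votes, key=votes.get)

rules = guide(CANDIDATES, STATES)     # prune candidates via the teacher
weights = [0.0] * len(rules)          # uniform start; training adjusts these
```

The resulting policy is interpretable (it is a small weighted rule set) and each action can be explained by the rules that fired for it.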

Author Information

Quentin Delfosse (CS Department, Technical University of Darmstadt, Germany)

PhD student at AIML (TU Darmstadt). My research focuses on creating interpretable RL agents, using object extraction methods, logic, and human-understandable concepts.

Hikaru Shindo (TU Darmstadt)
Devendra Dhami (Eindhoven University of Technology)
Kristian Kersting (TU Darmstadt)
