The limited priors required by neural networks make them the dominant choice for encoding and learning policies with reinforcement learning (RL). However, they are also black boxes, making it hard to understand an agent's behavior, especially when it works at the image level. Therefore, neuro-symbolic RL aims at creating policies that are interpretable in the first place. Unfortunately, interpretability is not explainability. To achieve both, we introduce Neurally gUided Differentiable loGic policiEs (NUDGE). NUDGE exploits trained neural network-based agents to guide the search for candidate-weighted logic rules, then uses differentiable logic to train the logic agents. Our experimental evaluation demonstrates that NUDGE agents can induce interpretable and explainable policies while outperforming purely neural ones and showing good flexibility to environments of different initial states and problem sizes.
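To make the idea of a candidate-weighted, differentiable logic policy concrete, here is a minimal PyTorch sketch. It is an illustrative assumption, not the paper's implementation: the names (LogicPolicy, rule_valuations, rule_to_action) are hypothetical, the candidate rules are assumed to be given (in NUDGE they would come from the neurally guided search), and rule bodies are assumed to evaluate to soft truth values in [0, 1].

```python
# Hypothetical sketch: candidate logic rules carry learnable weights, each rule
# suggests one action, and the weights are trained with a plain policy gradient.
import torch
import torch.nn as nn


class LogicPolicy(nn.Module):
    def __init__(self, num_rules: int, num_actions: int):
        super().__init__()
        # One learnable weight per candidate rule.
        self.rule_weights = nn.Parameter(torch.zeros(num_rules))
        # Fixed, illustrative mapping from each rule to the action it suggests.
        self.register_buffer("rule_to_action",
                             torch.randint(0, num_actions, (num_rules,)))
        self.num_actions = num_actions

    def forward(self, rule_valuations: torch.Tensor) -> torch.Tensor:
        # rule_valuations: (num_rules,) soft truth values of each rule body
        # for the current state, e.g. from differentiable predicates.
        weights = torch.softmax(self.rule_weights, dim=0)
        scores = torch.zeros(self.num_actions, device=rule_valuations.device)
        # Each rule adds its weighted valuation to the score of its action.
        scores = scores.index_add(0, self.rule_to_action, weights * rule_valuations)
        return torch.softmax(scores, dim=0)  # action distribution


# Usage: sample an action and take one REINFORCE step on the rule weights.
policy = LogicPolicy(num_rules=8, num_actions=3)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

valuations = torch.rand(8)   # stand-in for differentiable rule evaluation
dist = torch.distributions.Categorical(policy(valuations))
action = dist.sample()
reward = 1.0                 # stand-in for the environment's return
loss = -dist.log_prob(action) * reward
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because the action distribution is a differentiable function of the rule weights, gradients flow from the RL objective into the weights, so the learned policy remains a small set of readable weighted rules rather than an opaque network.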
Author Information
Quentin Delfosse (CS Department, Technical University of Darmstadt, Germany)

PhD student at AIML (TU Darmstadt). My research focuses on creating interpretable RL agents, using object extraction methods, logic, and human-understandable concepts.
Hikaru Shindo (TU Darmstadt)
Devendra Dhami (Eindhoven University of Technology)
Kristian Kersting (TU Darmstadt)
More from the Same Authors
- 2022 : Mixture of Gaussian Processes with Probabilistic Circuits for Multi-Output Regression »
  Mingye Zhu · Zhongjie Yu · Martin Trapp · Arseny Skryagin · Kristian Kersting
- 2023 : LEDITS++: Limitless Image Editing using Text-to-Image Models »
  Manuel Brack · Linoy Tsban · Katharina Kornmeier · Apolinário Passos · Felix Friedrich · Patrick Schramowski · Kristian Kersting
- 2023 : Leveraging Diffusion-Based Image Variations for Robust Training on Poisoned Data »
  Lukas Struppek · Martin Bernhard Hentschel · Clifton Poth · Dominik Hintersdorf · Kristian Kersting
- 2023 : Defending Our Privacy With Backdoors »
  Dominik Hintersdorf · Lukas Struppek · Daniel Neider · Kristian Kersting
- 2023 Poster: Do Not Marginalize Mechanisms, Rather Consolidate! »
  Moritz Willig · Matej Zečević · Devendra Dhami · Kristian Kersting
- 2023 Poster: ATMAN: Understanding Transformer Predictions Through Memory Efficient Attention Manipulation »
  Björn Deiseroth · Mayukh Deb · Samuel Weinbach · Manuel Brack · Patrick Schramowski · Kristian Kersting
- 2023 Poster: SEGA: Instructing Text-to-Image Models using Semantic Guidance »
  Manuel Brack · Felix Friedrich · Dominik Hintersdorf · Lukas Struppek · Patrick Schramowski · Kristian Kersting
- 2023 Poster: MultiFusion: Fusing Pre-Trained Models for Multi-Lingual, Multi-Modal Image Generation »
  Marco Bellagente · Manuel Brack · Hannah Teufel · Felix Friedrich · Björn Deiseroth · Constantin Eichenberg · Andrew Dai · Robert Baldock · Souradeep Nanda · Koen Oostermeijer · Andres Felipe Cruz-Salinas · Patrick Schramowski · Kristian Kersting · Samuel Weinbach
- 2023 Poster: Characteristic Circuits »
  Zhongjie Yu · Martin Trapp · Kristian Kersting
- 2023 Oral: Characteristic Circuits »
  Zhongjie Yu · Martin Trapp · Kristian Kersting
- 2022 : Panel Discussion: "Heading for a Unifying View on nCSI" »
  Tobias Gerstenberg · Sriraam Natarajan · Mausam · Guy Van den Broeck · Devendra Dhami
- 2022 Workshop: Workshop on neuro Causal and Symbolic AI (nCSI) »
  Matej Zečević · Devendra Dhami · Christina Winkler · Thomas Kipf · Robert Peharz · Petar Veličković
- 2022 : Panel »
  Guy Van den Broeck · Cassio de Campos · Denis Maua · Kristian Kersting · Rianne van den Berg
- 2021 Poster: Interventional Sum-Product Networks: Causal Inference with Tractable Probabilistic Models »
  Matej Zečević · Devendra Dhami · Athresh Karanam · Sriraam Natarajan · Kristian Kersting