Bayesian networks are appealing for clinical decision-making because they can encode causal knowledge, but their practical adoption remains limited by their inability to handle unstructured data. Neural networks do not share this limitation, yet they are not interpretable and are inherently unable to exploit causal structure in the input space. Our goal is to build neural networks that combine the advantages of both approaches. Motivated by the prospect of injecting causal knowledge while training such neural networks, this work presents initial steps in that direction. We demonstrate how a neural network can be trained to output conditional probabilities, providing approximately the same functionality as a Bayesian network. Additionally, we propose two training strategies for encoding the independence relations implied by a given causal structure into the neural network. We present initial results in a proof-of-concept setting, showing that the neural model acts as an understudy to its Bayesian network counterpart, approximating its probabilistic and causal properties.
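To make the idea concrete, the following is a minimal sketch of one possible instantiation, not the authors' implementation: a neural network is trained on samples from a toy Bayesian network to output the conditional probability of a target variable, and a regularization term encodes one independence relation implied by the causal graph. The graph (S -> C <- P, plus an unrelated variable Z), all variable names, the CPT values, and the form of the regularizer are illustrative assumptions.

```python
# Sketch: train a neural "understudy" to a toy Bayesian network and
# regularize it toward an independence relation from the causal graph.
# All names, probabilities, and the regularizer are illustrative
# assumptions, not taken from the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Ground-truth network over binary variables: P(S)=0.3, P(P)=0.1,
# P(Z)=0.5, and a CPT for P(C=1 | S, P). Z has no edge into C.
p_s, p_p, p_z = 0.3, 0.1, 0.5
cpt_c = torch.tensor([[0.01, 0.20],   # rows: S=0/1, columns: P=0/1
                      [0.30, 0.70]])

def sample(n):
    s = torch.bernoulli(torch.full((n,), p_s))
    p = torch.bernoulli(torch.full((n,), p_p))
    z = torch.bernoulli(torch.full((n,), p_z))
    c = torch.bernoulli(cpt_c[s.long(), p.long()])
    return s, p, z, c

# The understudy sees all three variables and outputs P(C=1 | s, p, z).
net = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
bce = nn.BCELoss()

for step in range(3000):
    s, p, z, c = sample(256)
    x = torch.stack([s, p, z], dim=1)
    pred = net(x).squeeze(1)
    nll = bce(pred, c)  # fit the conditional probability of C
    # Independence regularizer: the graph implies C is independent of Z
    # given S and P, so the output should not change when Z is flipped.
    x_flip = torch.stack([s, p, 1 - z], dim=1)
    indep = ((pred - net(x_flip).squeeze(1)) ** 2).mean()
    loss = nll + 1.0 * indep
    opt.zero_grad()
    loss.backward()
    opt.step()

# The learned conditionals should approximate the ground-truth CPT.
with torch.no_grad():
    for s_val in (0., 1.):
        for p_val in (0., 1.):
            q = net(torch.tensor([[s_val, p_val, 0.]])).item()
            true = cpt_c[int(s_val), int(p_val)].item()
            print(f"P(C=1 | S={s_val:.0f}, P={p_val:.0f}) "
                  f"~ {q:.3f} (true {true:.3f})")
```

In this framing, the supervised loss gives the network approximately the same query functionality as the Bayesian network, while the flip-invariance penalty is one hypothetical way to encode a conditional independence relation into the model during training.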
Author Information
Paloma Rabaey (Ghent University)
Cedric De Boom (Ghent University)
Thomas Demeester (Ghent University)
More from the Same Authors
- 2022 Poster: TempEL: Linking Dynamically Evolving and Newly Emerging Entities
  Klim Zaporojets · Lucie-Aimée Kaffee · Johannes Deleu · Thomas Demeester · Chris Develder · Isabelle Augenstein
- 2018 Poster: DeepProbLog: Neural Probabilistic Logic Programming
  Robin Manhaeve · Sebastijan Dumancic · Angelika Kimmig · Thomas Demeester · Luc De Raedt
- 2018 Spotlight: DeepProbLog: Neural Probabilistic Logic Programming
  Robin Manhaeve · Sebastijan Dumancic · Angelika Kimmig · Thomas Demeester · Luc De Raedt