We present discriminative Gaifman models, a novel family of relational machine learning models. Gaifman models learn feature representations bottom up from representations of locally connected and bounded-size regions of knowledge bases (KBs). Considering local and bounded-size neighborhoods of knowledge bases renders logical inference and learning tractable, mitigates the problem of overfitting, and facilitates weight sharing. Gaifman models sample neighborhoods of knowledge bases so as to make the learned relational models more robust to missing objects and relations, a common situation in open-world KBs. We present the core ideas of Gaifman models and apply them to large-scale relational learning problems. We also discuss the ways in which Gaifman models relate to some existing relational machine learning approaches.
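To make the neighborhood-sampling idea in the abstract concrete, the following is a minimal, hedged sketch in Python. It builds the Gaifman graph of a knowledge base given as (subject, relation, object) triples and samples a bounded-size, depth-bounded neighborhood around an anchor object. The function names (gaifman_graph, sample_neighborhood) and the parameters depth and size_bound are illustrative assumptions, not the paper's implementation.

    # Sketch under stated assumptions: sampling bounded-size neighborhoods
    # in the Gaifman graph of a KB; names and parameters are hypothetical.
    import random
    from collections import defaultdict

    def gaifman_graph(triples):
        """Undirected Gaifman graph: objects are nodes, and two objects
        are adjacent if they co-occur in some relation tuple."""
        adj = defaultdict(set)
        for s, _, o in triples:
            adj[s].add(o)
            adj[o].add(s)
        return adj

    def sample_neighborhood(adj, anchor, depth=1, size_bound=10, rng=random):
        """Sample at most `size_bound` objects from the depth-`depth`
        neighborhood of `anchor`, always keeping the anchor itself."""
        frontier, neighborhood = {anchor}, {anchor}
        for _ in range(depth):
            frontier = {n for u in frontier for n in adj[u]} - neighborhood
            neighborhood |= frontier
        candidates = list(neighborhood - {anchor})
        sampled = rng.sample(candidates, min(size_bound - 1, len(candidates)))
        return [anchor] + sampled

    # Toy usage on a three-triple KB.
    triples = [("berlin", "capital_of", "germany"),
               ("germany", "member_of", "eu"),
               ("merkel", "born_in", "germany")]
    adj = gaifman_graph(triples)
    print(sample_neighborhood(adj, "germany", depth=1, size_bound=3))

Feature representations would then be computed over such sampled neighborhoods rather than over the full KB, which is what keeps inference and learning tractable and supports weight sharing across neighborhoods.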
Author Information
Mathias Niepert (University of Stuttgart / NEC Labs Europe)
More from the Same Authors
- 2021 Poster: Efficient Learning of Discrete-Continuous Computation Graphs
  David Friede · Mathias Niepert
- 2021 Poster: Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions
  Mathias Niepert · Pasquale Minervini · Luca Franceschi
- 2017 Poster: Learning Graph Representations with Embedding Propagation
  Alberto Garcia Duran · Mathias Niepert