We systematically investigate graph transformations that enable standard message passing to simulate state-of-the-art graph neural networks (GNNs) without loss of expressivity. Using these transformations, many state-of-the-art GNNs can be implemented with message passing operations from standard libraries, eliminating many sources of implementation issues and allowing for better code optimization. We distinguish between weak and strong simulation: weak simulation achieves the same expressivity only after several message passing steps, while strong simulation achieves it after every message passing step. Our contribution provides a direct way to translate common operations of non-standard GNNs into graph transformations that allow for strong or weak simulation. Our empirical evaluation shows competitive predictive performance of message passing on transformed graphs across various molecular benchmark datasets, in several cases surpassing the original GNNs.
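To make the simulation idea concrete, here is a minimal sketch (not the authors' code) of the two-step recipe the abstract describes: first apply a graph transformation, then run standard message passing from an off-the-shelf library. The specific choices below, a virtual-node transformation, a GIN layer from PyTorch Geometric, and the toy 4-cycle graph, are illustrative assumptions, not the paper's actual transformations.

import torch
import torch.nn as nn
import torch_geometric.transforms as T
from torch_geometric.data import Data
from torch_geometric.nn import GINConv

# Toy 4-cycle with 3-dimensional node features (hypothetical input).
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3, 3, 0],
                           [1, 0, 2, 1, 3, 2, 0, 3]])
data = Data(x=torch.randn(4, 3), edge_index=edge_index)

# Step 1: graph transformation. Here: add a virtual node connected to
# every node, a common ingredient of non-standard GNN architectures.
data = T.VirtualNode()(data)

# Step 2: standard message passing on the transformed graph, using only
# library operations (no custom GNN layer needed).
mlp = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 16))
conv = GINConv(mlp)
out = conv(data.x, data.edge_index)
print(out.shape)  # (5, 16): 4 original nodes plus the virtual node

Because the non-standard behavior is pushed entirely into the preprocessing step, the downstream model is ordinary message passing, which is exactly what lets standard-library implementations and their optimizations be reused.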
Author Information
Fabian Jogl (TU Wien)
Maximilian Thiessen (TU Wien)
Thomas Gärtner (TU Wien)