

Poster

Graph Neural Networks with Adaptive Readouts

David Buterez · Jon Paul Janet · Steven J Kiddle · Dino Oglic · Pietro Liò

Hall J (level 1) #321

Keywords: [ neural ] [ Invariance ] [ invariant ] [ readout ] [ deep sets ] [ Permutation ] [ GNN ] [ aggregator ] [ Graph Representation Learning ] [ Adaptive ] [ differentiable ] [ graph neural networks ]


Abstract:

An effective aggregation of node features into a graph-level representation via readout functions is an essential step in numerous learning tasks involving graph neural networks. Typically, readouts are simple and non-adaptive functions designed such that the resulting hypothesis space is permutation invariant. Prior work on deep sets indicates that such readouts might require complex node embeddings that can be difficult to learn via standard neighborhood aggregation schemes. Motivated by this, we investigate the potential of adaptive readouts given by neural networks that do not necessarily give rise to permutation invariant hypothesis spaces. We argue that in some problems, such as binding affinity prediction where molecules are typically presented in a canonical form, it might be possible to relax the constraint of permutation invariance on the hypothesis space and learn a more effective model of the affinity by employing an adaptive readout function. Our empirical results demonstrate the effectiveness of neural readouts on more than 40 datasets spanning different domains and graph characteristics. Moreover, we observe a consistent improvement over standard readouts (i.e., sum, max, and mean) across different numbers of neighborhood aggregation iterations and different convolutional operators.
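For illustration only, the sketch below contrasts a standard permutation-invariant sum readout with one possible adaptive (neural) readout that pads node embeddings to a fixed size and maps them through an MLP. The module names, padding scheme, and layer sizes are assumptions made for this sketch in plain PyTorch; they are not presented as the exact architecture used in the paper.

```python
import torch
import torch.nn as nn

class SumReadout(nn.Module):
    """Standard readout: sum node embeddings per graph (permutation invariant)."""
    def forward(self, node_emb, batch, num_graphs):
        # node_emb: [num_nodes, dim]; batch: [num_nodes] graph index of each node
        out = torch.zeros(num_graphs, node_emb.size(1), device=node_emb.device)
        return out.index_add_(0, batch, node_emb)

class MLPReadout(nn.Module):
    """Adaptive readout sketch: pad each graph's node embeddings to a fixed
    number of nodes, flatten, and map to a graph embedding with an MLP.
    Note that this is not permutation invariant."""
    def __init__(self, dim, max_nodes, out_dim):
        super().__init__()
        self.max_nodes = max_nodes
        self.mlp = nn.Sequential(
            nn.Linear(max_nodes * dim, 2 * out_dim),
            nn.ReLU(),
            nn.Linear(2 * out_dim, out_dim),
        )

    def forward(self, node_emb, batch, num_graphs):
        dim = node_emb.size(1)
        padded = torch.zeros(num_graphs, self.max_nodes, dim, device=node_emb.device)
        for g in range(num_graphs):
            nodes = node_emb[batch == g][: self.max_nodes]
            padded[g, : nodes.size(0)] = nodes
        return self.mlp(padded.flatten(1))

# Usage: a batch of two graphs with 3 and 2 nodes and 8-dimensional node embeddings
node_emb = torch.randn(5, 8)
batch = torch.tensor([0, 0, 0, 1, 1])
graph_emb = MLPReadout(dim=8, max_nodes=4, out_dim=16)(node_emb, batch, num_graphs=2)
print(graph_emb.shape)  # torch.Size([2, 16])
```

Because the MLP sees the node embeddings in a fixed order, permuting the nodes of a graph can change its embedding; this is precisely the relaxation of permutation invariance that the abstract argues can be acceptable when inputs arrive in a canonical form.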
