An effective aggregation of node features into a graph-level representation via readout functions is an essential step in numerous learning tasks involving graph neural networks. Typically, readouts are simple and non-adaptive functions designed such that the resulting hypothesis space is permutation invariant. Prior work on deep sets indicates that such readouts might require complex node embeddings that can be difficult to learn via standard neighborhood aggregation schemes. Motivated by this, we investigate the potential of adaptive readouts given by neural networks that do not necessarily give rise to permutation invariant hypothesis spaces. We argue that in some problems such as binding affinity prediction where molecules are typically presented in a canonical form it might be possible to relax the constraints on permutation invariance of the hypothesis space and learn a more effective model of the affinity by employing an adaptive readout function. Our empirical results demonstrate the effectiveness of neural readouts on more than 40 datasets spanning different domains and graph characteristics. Moreover, we observe a consistent improvement over standard readouts (i.e., sum, max, and mean) relative to the number of neighborhood aggregation iterations and different convolutional operators.
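The distinction the abstract draws can be illustrated with a minimal sketch. `sum_readout` below is a standard permutation-invariant readout, while `mlp_readout` stands in for an adaptive readout: a single linear layer applied to node embeddings in their given (canonical) order, so permuting the nodes changes its output. All names and the tiny random-weight network are illustrative assumptions, not the paper's actual architecture.

```python
import random

def sum_readout(node_embeddings):
    """Standard readout: elementwise sum over nodes (permutation invariant)."""
    dim = len(node_embeddings[0])
    return [sum(h[i] for h in node_embeddings) for i in range(dim)]

def mlp_readout(node_embeddings, weights):
    """Adaptive-readout sketch: a linear layer over the concatenation of
    node embeddings in canonical order (NOT permutation invariant)."""
    flat = [x for h in node_embeddings for x in h]  # node order matters here
    return [sum(w * x for w, x in zip(row, flat)) for row in weights]

# Two nodes with 2-d embeddings, presented in a fixed canonical order.
graph = [[1.0, 2.0], [3.0, 4.0]]
permuted = [graph[1], graph[0]]

print(sum_readout(graph) == sum_readout(permuted))  # True: invariant to node order

random.seed(0)
W = [[random.random() for _ in range(4)] for _ in range(2)]
print(mlp_readout(graph, W) == mlp_readout(permuted, W))  # False: order-sensitive
```

The sum readout collapses the node dimension before any mixing, which is what forces the node embeddings themselves to carry all graph-level information; the adaptive variant can mix across nodes directly, at the cost of invariance.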
Author Information
David Buterez (University of Cambridge)
Jon Paul Janet (AstraZeneca)
Steven J Kiddle (AstraZeneca)
Director, Health Data Science in the Healthcare Analytics team, Data Science & Advanced Analytics department at AstraZeneca. Leads the real-world insights/evidence activities for the Respiratory & Immunology therapy area in R&D Biopharmaceuticals.
Dino Oglic (AstraZeneca)
Pietro Liò (University of Cambridge)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Graph Neural Networks with Adaptive Readouts
  Tue, Nov 29 through Wed, Nov 30, Hall J #321
More from the Same Authors
- 2022 : On the Expressive Power of Geometric Graph Neural Networks
  Cristian Bodnar · Chaitanya K. Joshi · Simon Mathis · Taco Cohen · Pietro Liò
- 2022 Spotlight: Lightning Talks 2A-3
  David Buterez · Chengan He · Xuan Kan · Yutong Lin · Konstantin Schürholt · Yu Yang · Louis Annabi · Wei Dai · Xiaotian Cheng · Alexandre Pitti · Ze Liu · Jon Paul Janet · Jun Saito · Boris Knyazev · Mathias Quoy · Zheng Zhang · James Zachary · Steven J Kiddle · Xavier Giro-i-Nieto · Chang Liu · Hejie Cui · Zilong Zhang · Hakan Bilen · Damian Borth · Dino Oglic · Holly Rushmeier · Han Hu · Xiangyang Ji · Yi Zhou · Nanning Zheng · Ying Guo · Pietro Liò · Stephen Lin · Carl Yang · Yue Cao
- 2022 : Achievements and Challenges Part 1/2
  Dimitris Vlitas · Dino Oglic
- 2022 Workshop: Synthetic Data for Empowering ML Research
  Mihaela van der Schaar · Zhaozhi Qian · Sergul Aydore · Dimitris Vlitas · Dino Oglic · Tucker Balch
- 2021 : Learning Graph Search Heuristics
  Michal Pándy · Rex Ying · Gabriele Corso · Petar Veličković · Jure Leskovec · Pietro Liò
- 2021 Poster: Neural Distance Embeddings for Biological Sequences
  Gabriele Corso · Zhitao Ying · Michal Pándy · Petar Veličković · Jure Leskovec · Pietro Liò
- 2021 Poster: Weisfeiler and Lehman Go Cellular: CW Networks
  Cristian Bodnar · Fabrizio Frasca · Nina Otter · Yuguang Wang · Pietro Liò · Guido Montufar · Michael Bronstein
- 2020 : Contributed Talk 4: Directional Graph Networks
  Dominique Beaini · Saro Passaro · Vincent Létourneau · Will Hamilton · Gabriele Corso · Pietro Liò
- 2020 Poster: Principal Neighbourhood Aggregation for Graph Nets
  Gabriele Corso · Luca Cavalleri · Dominique Beaini · Pietro Liò · Petar Veličković
- 2016 Poster: Greedy Feature Construction
  Dino Oglic · Thomas Gärtner