In the past few years, graph neural networks (GNNs) have become the de facto model of choice for graph classification. While most GNNs can, from a theoretical viewpoint, operate on graphs of any size, it is empirically observed that their classification performance degrades when they are applied to graphs whose sizes differ from those in the training data. Previous works have tried to tackle this issue in graph classification either by providing the model with inductive biases derived from assumptions on the generative process of the graphs, or by requiring access to graphs from the test domain. The first strategy is tied to the quality of the assumptions made about the generative process, and requires specific models designed around an explicit definition of that process, leaving open the question of how to improve the performance of generic GNN models in general settings. The second strategy can be applied to any GNN, but requires access to information that is not always easy to obtain. In this work we consider the scenario in which we only have access to the training data, and we propose a regularization strategy that can be applied to any GNN to improve its generalization from smaller to larger graphs without requiring access to the test data. Our regularization is based on the idea of simulating a shift in the size of the training graphs using coarsening techniques, and encouraging the model to be robust to such a shift. Experimental results on standard datasets show that popular GNN models, trained on the 50% smallest graphs in the dataset and tested on the 10% largest graphs, obtain performance improvements of up to 30% when trained with our regularization strategy.
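The core idea, simulating a size shift via coarsening and penalizing the resulting change in the model's graph-level representation, can be sketched as follows. This is a minimal illustration, not the paper's actual method: the toy mean-aggregation "GNN", the random pairwise-merge coarsening, and the squared-distance penalty `size_shift_reg` are all simplified stand-ins for the real architecture, coarsening algorithm, and robustness loss used in the work.

```python
import random

def mean_aggregate(adj, feats):
    """One round of mean-neighbour message passing (toy stand-in for a GNN layer)."""
    out = []
    for i, nbrs in enumerate(adj):
        pooled = list(feats[i])          # include the node's own features
        for j in nbrs:
            for k in range(len(pooled)):
                pooled[k] += feats[j][k]
        deg = len(nbrs) + 1
        out.append([v / deg for v in pooled])
    return out

def readout(node_feats):
    """Mean readout over nodes -> graph-level embedding."""
    d, n = len(node_feats[0]), len(node_feats)
    return [sum(f[k] for f in node_feats) / n for k in range(d)]

def coarsen(adj, feats, seed=0):
    """Match each node with a random unmatched neighbour and merge the pair,
    averaging features (a crude stand-in for principled coarsening schemes)."""
    rng = random.Random(seed)
    n = len(adj)
    group = [-1] * n                     # coarse-node id of each original node
    g = 0
    order = list(range(n))
    rng.shuffle(order)
    for i in order:
        if group[i] != -1:
            continue
        group[i] = g
        free = [j for j in adj[i] if group[j] == -1]
        if free:                         # merge i with one unmatched neighbour
            group[rng.choice(free)] = g
        g += 1
    # coarse features: average over the merged nodes
    d = len(feats[0])
    cf = [[0.0] * d for _ in range(g)]
    cnt = [0] * g
    for i in range(n):
        gi = group[i]
        cnt[gi] += 1
        for k in range(d):
            cf[gi][k] += feats[i][k]
    cf = [[v / cnt[gi] for v in cf[gi]] for gi in range(g)]
    # coarse adjacency: two groups are connected if any member pair was
    cadj = [set() for _ in range(g)]
    for i in range(n):
        for j in adj[i]:
            if group[i] != group[j]:
                cadj[group[i]].add(group[j])
    return [sorted(s) for s in cadj], cf

def size_shift_reg(adj, feats):
    """Penalty: squared distance between the embedding of the original graph
    and the embedding of its coarsened version."""
    cadj, cf = coarsen(adj, feats)
    z = readout(mean_aggregate(adj, feats))
    zc = readout(mean_aggregate(cadj, cf))
    return sum((a - b) ** 2 for a, b in zip(z, zc))
```

During training, such a penalty would be added to the usual classification loss, weighted by a hyperparameter, so that the model's graph-level representations become insensitive to the simulated size shift.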
Author Information
Davide Buffelli (University of Padova)
Pietro Lió (University of Cambridge)
Fabio Vandin (University of Padova)
More from the Same Authors
- 2020: A Meta-Learning Approach for Graph Representation Learning in Multi-Task Settings
  Davide Buffelli
- 2021: Interpretable Data Analysis for Bench-to-Bedside Research
  Zohreh Shams · Botty Dimanov · Nikola Simidjievski · Helena Andres-Terre · Paul Scherer · Urška Matjašec · Mateja Jamnik · Pietro Lió
- 2021: Structure-aware generation of drug-like molecules
  Pavol Drotar · Arian Jamasb · Ben Day · Catalina Cangea · Pietro Lió
- 2021: 3D Pre-training improves GNNs for Molecular Property Prediction
  Hannes Stärk · Dominique Beaini · Gabriele Corso · Prudencio Tossou · Christian Dallago · Stephan Günnemann · Pietro Lió
- 2021: Approximate Latent Force Model Inference
  Jacob Moss · Felix Opolka · Pietro Lió
- 2022: Learning Feynman Diagrams using Graph Neural Networks
  Alexander Norcliffe · Harrison Mitchell · Pietro Lió
- 2022: A physics-informed search for metric solutions to Ricci flow, their embeddings, and visualisation
  Aarjav Jain · Challenger Mishra · Pietro Lió
- 2022: Improving Classification and Data Imputation for Single-Cell Transcriptomics with Graph Neural Networks
  Han-Bo Li · Ramon Viñas Torné · Pietro Lió
- 2022: Structure-based Drug Design with Equivariant Diffusion Models
  Arne Schneuing · Yuanqi Du · Charles Harris · Arian Jamasb · Ilia Igashov · weitao Du · Tom Blundell · Pietro Lió · Carla Gomes · Max Welling · Michael Bronstein · Bruno Correia
- 2022: A Federated Learning benchmark for Drug-Target Interaction
  Filip Svoboda · Gianluca Mittone · Nicholas Lane · Pietro Lió
- 2022: Benchmarking Graph Neural Network-based Imputation Methods on Single-Cell Transcriptomics Data
  Han-Bo Li · Ramon Viñas Torné · Pietro Lió
- 2022: Sheaf Attention Networks
  Federico Barbero · Cristian Bodnar · Haitz Sáez de Ocáriz Borde · Pietro Lió
- 2022: Human Interventions in Concept Graph Networks
  Lucie Charlotte Magister · Pietro Barbiero · Dmitry Kazhdan · Federico Siciliano · Gabriele Ciravegna · Fabrizio Silvestri · Mateja Jamnik · Pietro Lió
- 2022: Dynamic outcomes-based clustering of disease progression in mechanically ventilated patients
  Emma Rocheteau · Ioana Bica · Pietro Lió · Ari Ercole
- 2022 Poster: Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off
  Mateo Espinosa Zarlenga · Pietro Barbiero · Gabriele Ciravegna · Giuseppe Marra · Francesco Giannini · Michelangelo Diligenti · Zohreh Shams · Frederic Precioso · Stefano Melacci · Adrian Weller · Pietro Lió · Mateja Jamnik
- 2022 Poster: Neural Sheaf Diffusion: A Topological Perspective on Heterophily and Oversmoothing in GNNs
  Cristian Bodnar · Francesco Di Giovanni · Benjamin Chamberlain · Pietro Lió · Michael Bronstein
- 2022 Poster: Graphein - a Python Library for Geometric Deep Learning and Network Analysis on Biomolecular Structures and Interaction Networks
  Arian Jamasb · Ramon Viñas Torné · Eric Ma · Yuanqi Du · Charles Harris · Kexin Huang · Dominic Hall · Pietro Lió · Tom Blundell
- 2022 Poster: Composite Feature Selection Using Deep Ensembles
  Fergus Imrie · Alexander Norcliffe · Pietro Lió · Mihaela van der Schaar
- 2021: Neural ODE Processes: A Short Summary
  Alexander Norcliffe · Cristian Bodnar · Ben Day · Jacob Moss · Pietro Lió
- 2021: On Second Order Behaviour in Augmented Neural ODEs: A Short Summary
  Alexander Norcliffe · Cristian Bodnar · Ben Day · Nikola Simidjievski · Pietro Lió
- 2020 Poster: Constraining Variational Inference with Geometric Jensen-Shannon Divergence
  Jacob Deasy · Nikola Simidjievski · Pietro Lió
- 2020 Poster: On Second Order Behaviour in Augmented Neural ODEs
  Alexander Norcliffe · Cristian Bodnar · Ben Day · Nikola Simidjievski · Pietro Lió