
Weisfeiler and Lehman Go Cellular: CW Networks
Cristian Bodnar · Fabrizio Frasca · Nina Otter · Yuguang Wang · Pietro Liò · Guido Montufar · Michael Bronstein

Tue Dec 07 08:30 AM -- 10:00 AM (PST)

Graph Neural Networks (GNNs) are limited in their expressive power, struggle with long-range interactions, and lack a principled way to model higher-order structures. These problems can be attributed to the strong coupling between the computational graph and the input graph structure. The recently proposed Message Passing Simplicial Networks naturally decouple these elements by performing message passing on the clique complex of the graph. Nevertheless, these models can be severely constrained by the rigid combinatorial structure of Simplicial Complexes (SCs). In this work, we extend recent theoretical results on SCs to regular Cell Complexes, topological objects that flexibly subsume SCs and graphs. We show that this generalisation provides a powerful set of graph "lifting" transformations, each leading to a unique hierarchical message passing procedure. The resulting methods, which we collectively call CW Networks (CWNs), are strictly more powerful than the WL test and not less powerful than the 3-WL test. In particular, we demonstrate the effectiveness of one such scheme, based on rings, when applied to molecular graph problems. The proposed architecture benefits from provably greater expressivity than commonly used GNNs, from principled modelling of higher-order signals, and from compressed distances between nodes. We demonstrate that our model achieves state-of-the-art results on a variety of molecular datasets.
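The ring-based lifting and hierarchical message passing described in the abstract can be sketched as follows. This is a hypothetical, heavily simplified plain-Python illustration, not the authors' implementation: the toy graph, the precomputed ring list, and the unweighted sum aggregation are all assumptions, and the actual CWN additionally uses learnable update functions and co-boundary messages.

```python
# Hypothetical sketch: lift a small graph to a 2-dimensional cell complex
# (nodes = 0-cells, edges = 1-cells, rings = 2-cells) and run one round of
# CWN-style message passing over boundary and upper adjacencies.
from collections import defaultdict

# Toy graph: a triangle 0-1-2 (one ring) with a pendant node 3.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
rings = [(0, 1, 2)]  # induced cycles attached as 2-cells by the "lifting"

nodes = sorted({v for e in edges for v in e})

# Boundary relation: which (k-1)-cells bound each k-cell.
boundary = {('edge', e): [('node', v) for v in e] for e in edges}
for r in rings:
    ring_edges = [tuple(sorted((r[i], r[(i + 1) % len(r)])))
                  for i in range(len(r))]
    boundary[('ring', r)] = [('edge', e) for e in ring_edges]

# Upper adjacency: two k-cells are upper-adjacent if they are both on the
# boundary of a common (k+1)-cell.
upper = defaultdict(set)
for cell, faces in boundary.items():
    for a in faces:
        for b in faces:
            if a != b:
                upper[a].add(b)

# One message-passing round: each cell sums its own feature with messages
# from its boundary cells and its upper-adjacent cells (features start at 1).
feat = {c: 1.0 for c in list(boundary) + [('node', v) for v in nodes]}

def step(feat):
    new = {}
    for c in feat:
        msg = sum(feat[f] for f in boundary.get(c, []))   # boundary messages
        msg += sum(feat[n] for n in upper.get(c, set()))  # upper-adjacency
        new[c] = feat[c] + msg
    return new

feat = step(feat)
```

After one round, a node inside the ring already receives information through both its incident edges and, indirectly, the 2-cell that couples those edges, which is how the lifting shortens effective distances between nodes.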

Author Information

Cristian Bodnar (University of Cambridge)
Fabrizio Frasca (Twitter)
Nina Otter (UCLA)
Yuguang Wang (Shanghai Jiao Tong University; University of New South Wales)
Pietro Liò (University of Cambridge)
Guido Montufar (UCLA / MPI MIS)

Guido is an Assistant Professor at the Departments of Mathematics and Statistics at the University of California, Los Angeles (UCLA), and he is the principal investigator in the ERC Project "Deep Learning Theory: Geometric Analysis of Capacity, Optimization, and Generalization for Improving Learning in Deep Neural Networks" at the Max Planck Institute for Mathematics in the Sciences. Guido Montúfar is interested in mathematical machine learning, especially the interplay of model capacity, optimization, and generalization in deep learning. In current projects, he investigates optimization landscapes and regularization strategies for neural networks, intrinsic motivation in reinforcement learning, information theoretic approaches to learning data representations, information geometric and optimal transportation approaches to generative modelling, and algebraic geometric approaches to graphical models with hidden variables.

Michael Bronstein (Imperial College London / Twitter)