Poster
What do Graph Neural Networks learn? Insights from Tropical Geometry
Tuan Anh Pham · Vikas Garg
East Exhibit Hall A-C #3000
Graph neural networks (GNNs) have been analyzed from multiple perspectives, including the WL-hierarchy, which exposes limits on their expressivity to distinguish graphs. However, characterizing the class of functions that they learn has remained unresolved. We address this fundamental question for message passing GNNs under ReLU activations, i.e., the de facto choice for most GNNs. We first show that such GNNs learn tropical rational signomial maps, establishing an equivalence with feedforward networks (FNNs). We then elucidate the role of the choice of aggregation and update functions, and derive the first general upper and lower bounds on the geometric complexity (i.e., the number of linear regions), establishing new results for popular architectures such as GraphSAGE and GIN. We also introduce and theoretically analyze several new architectures to illuminate the relative merits of the feedforward and the message passing layers, and the tradeoffs involving depth and number of trainable parameters. Finally, we also characterize the decision boundary for node and graph classification tasks.
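To make the abstract's central claim concrete, here is a brief illustrative sketch, using standard max-plus conventions rather than the paper's own notation, of why ReLU layers give rise to tropical rational (signomial) maps. In the max-plus semiring, $a \oplus b := \max(a,b)$ and $a \odot b := a + b$, so $\mathrm{ReLU}(x) = \max(x,0) = x \oplus 0$. For a ReLU layer $y = \mathrm{ReLU}(Wx + b)$, split $W = W_+ - W_-$ into its entrywise positive and negative parts; then each output coordinate satisfies
\[
y_i = \max\!\big(W_{i,:}\,x + b_i,\ 0\big) = \max\!\big((W_+)_{i,:}\,x + b_i,\ (W_-)_{i,:}\,x\big) - (W_-)_{i,:}\,x,
\]
a difference of two max-of-affine terms, i.e., a tropical rational expression (a signomial one, since the weights are real-valued rather than integer). Composing such layers, including ReLU aggregation and update steps in a message passing layer, again yields coordinate-wise differences of tropical signomials, which is the class of maps referred to above.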