Sheaf Attention Networks
Federico Barbero · Cristian Bodnar · Haitz Sáez de Ocáriz Borde · Pietro Lió

Sat Dec 03 09:25 AM -- 09:30 AM (PST)
Event URL: https://openreview.net/forum?id=LIDvgVjpkZr

Attention has become a central inductive bias for deep learning models irrespective of domain. However, increasing theoretical and empirical evidence suggests that Graph Attention Networks (GATs) suffer from the same pathological issues affecting many other Graph Neural Networks (GNNs). First, the features computed by GATs tend to become progressively smoother as more layers are stacked, and second, the model performs poorly on heterophilic graphs. Sheaf Neural Networks (SNNs), a new class of models inspired by algebraic topology and geometry, have shown much promise in tackling these two issues. Building upon the recent success of SNNs and the wide adoption of attention-based architectures, we propose Sheaf Attention Networks (SheafANs). By making use of a novel and more expressive attention mechanism equipped with geometric inductive biases, we show that this construction generalizes popular attention-based GNN models to cellular sheaves. We demonstrate that these models help tackle the oversmoothing and heterophily problems and show that, in practice, SheafANs consistently outperform GAT on synthetic and real-world benchmarks.
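The abstract describes the construction only at a high level: GAT-style attention combined with cellular-sheaf structure, where features live in per-node stalks and are transported along edges by restriction maps. As a rough illustration only, here is a minimal PyTorch sketch of how such a layer might be wired up. The class name SheafAttentionLayer, the learned per-edge restriction maps, and the residual update are illustrative assumptions, not the paper's actual formulation, which is specified at the event URL above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SheafAttentionLayer(nn.Module):
    # Hypothetical sketch (not the authors' implementation): attention over
    # d-dimensional vertex stalks, with a learned d x d restriction map per
    # edge transporting source features into the destination stalk.
    def __init__(self, channels, stalk_dim):
        super().__init__()
        self.d = stalk_dim
        edge_feats = 2 * stalk_dim * channels  # concatenated endpoint stalks
        self.restriction = nn.Linear(edge_feats, stalk_dim * stalk_dim)
        self.att = nn.Linear(edge_feats, 1)    # GAT-style scalar logit per edge
        self.lin = nn.Linear(channels, channels)

    def forward(self, x, edge_index):
        # x: (num_nodes, d, channels); edge_index: (2, num_edges)
        src, dst = edge_index
        h = torch.cat([x[src].flatten(1), x[dst].flatten(1)], dim=-1)
        # Transport source features through the edge's restriction map.
        maps = self.restriction(h).view(-1, self.d, self.d)
        msg = torch.bmm(maps, x[src])          # (num_edges, d, channels)
        # Softmax-normalised attention over each node's incoming edges.
        logits = F.leaky_relu(self.att(h)).squeeze(-1)
        w = (logits - logits.max()).exp()
        denom = torch.zeros(x.size(0), device=x.device).scatter_add_(0, dst, w)
        alpha = (w / denom[dst].clamp(min=1e-16)).view(-1, 1, 1)
        # Aggregate attention-weighted, sheaf-transported messages.
        out = torch.zeros_like(x).index_add_(0, dst, alpha * msg)
        return x + self.lin(out)               # residual update

A toy usage example, with 4 nodes, 2-dimensional stalks, and 8 channels:

layer = SheafAttentionLayer(channels=8, stalk_dim=2)
x = torch.randn(4, 2, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
y = layer(x, edge_index)                       # shape (4, 2, 8)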

Author Information

Federico Barbero (University of Oxford)
Cristian Bodnar (University of Cambridge)
Haitz Sáez de Ocáriz Borde (University of Cambridge)
Pietro Lió (University of Cambridge)
