Graph Neural Networks (GNNs) come in many flavors, but they should always be either \emph{invariant} (a permutation of the nodes of the input graph does not affect the output) or \emph{equivariant} (a permutation of the input permutes the output accordingly). In this paper, we consider a specific class of invariant and equivariant networks, for which we prove new universality theorems. More precisely, we consider networks with a single hidden layer, obtained by summing channels formed by applying an equivariant linear operator, a pointwise non-linearity, and either an invariant or an equivariant linear output layer. Recently, Maron et al. (2019) showed that universal invariant GNNs can be obtained by allowing higher-order tensorization inside the network. As a first contribution, we propose an alternative proof of this result, which relies on the Stone-Weierstrass theorem for algebras of real-valued functions. Our main contribution is then an extension of this result to the \emph{equivariant} case, which arises in many practical applications but has been less studied from a theoretical point of view. The proof relies on a new generalized Stone-Weierstrass theorem for algebras of equivariant functions, which is of independent interest. Additionally, unlike many previous works that consider a fixed number of nodes, our results show that a GNN defined by a single set of parameters can uniformly approximate a function defined on graphs of varying size.
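The invariance/equivariance properties and the single-hidden-layer structure described above can be illustrated with a minimal NumPy sketch. This is a simplification, not the paper's construction: it operates on first-order node features only (no higher-order tensorization), and all layer names and weight shapes here are illustrative assumptions. It shows one channel built from an equivariant linear operator, a pointwise non-linearity, and a sum-pooling (invariant) output.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 4  # n nodes, d features per node (illustrative sizes)

# Hypothetical weights for one channel.
W = rng.normal(size=(d, d))   # per-node mixing: commutes with node permutations
b = rng.normal(size=d)
w_out = rng.normal(size=d)

def equivariant_linear(X):
    # A per-node linear map plus a global mean term and a bias.
    # Each term commutes with row (node) permutations, so the sum is equivariant.
    return X @ W + X.mean(axis=0, keepdims=True) + b

def invariant_net(X):
    H = np.maximum(equivariant_linear(X), 0.0)  # pointwise ReLU
    return H.sum(axis=0) @ w_out                # sum over nodes -> invariant scalar

X = rng.normal(size=(n, d))
perm = rng.permutation(n)

# Equivariance: permuting input rows permutes the hidden representation.
assert np.allclose(equivariant_linear(X[perm]), equivariant_linear(X)[perm])
# Invariance: the pooled output is unchanged under node permutation.
assert np.allclose(invariant_net(X[perm]), invariant_net(X))
```

Note that the `X.mean` term is what distinguishes this from a plain pointwise MLP: it lets nodes interact while preserving equivariance, in the same spirit as the equivariant linear operators studied in the paper.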
Author Information
Nicolas Keriven (Ecole Normale Supérieure)
Gabriel Peyré (CNRS and ENS)
More from the Same Authors
- 2023 Poster: Abide by the law and follow the flow: conservation laws for gradient flows
  Sibylle Marcotte · Remi Gribonval · Gabriel Peyré
- 2023 Oral: Abide by the law and follow the flow: conservation laws for gradient flows
  Sibylle Marcotte · Remi Gribonval · Gabriel Peyré
- 2022 Poster: On global convergence of ResNets: From finite to infinite width using linear parameterization
  Raphaël Barboni · Gabriel Peyré · Francois-Xavier Vialard
- 2022 Poster: Do Residual Neural Networks discretize Neural Ordinary Differential Equations?
  Michael Sander · Pierre Ablin · Gabriel Peyré
- 2021 Workshop: Optimal Transport and Machine Learning
  Jason Altschuler · Charlotte Bunne · Laetitia Chapel · Marco Cuturi · Rémi Flamary · Gabriel Peyré · Alexandra Suvorikova
- 2021 Poster: On the Universality of Graph Neural Networks on Large Random Graphs
  Nicolas Keriven · Alberto Bietti · Samuel Vaiter
- 2020 Poster: Faster Wasserstein Distance Estimation with the Sinkhorn Divergence
  Lénaïc Chizat · Pierre Roussillon · Flavien Léger · François-Xavier Vialard · Gabriel Peyré
- 2020 Poster: Online Sinkhorn: Optimal Transport distances from sample streams
  Arthur Mensch · Gabriel Peyré
- 2020 Oral: Online Sinkhorn: Optimal Transport distances from sample streams
  Arthur Mensch · Gabriel Peyré
- 2020 Poster: Entropic Optimal Transport between Unbalanced Gaussian Measures has a Closed Form
  Hicham Janati · Boris Muzellec · Gabriel Peyré · Marco Cuturi
- 2020 Oral: Entropic Optimal Transport between Unbalanced Gaussian Measures has a Closed Form
  Hicham Janati · Boris Muzellec · Gabriel Peyré · Marco Cuturi
- 2019 Workshop: Optimal Transport for Machine Learning
  Marco Cuturi · Gabriel Peyré · Rémi Flamary · Alexandra Suvorikova