The GNN as a Low-Pass Filter: A Spectral Perspective on Achieving Stability in Neural PDE Solvers
Abstract
The choice of architecture in Graph Machine Learning (GML) presents a fundamental trade-off between the expressive power of universal approximators and the implicit regularization conferred by structured models like Graph Neural Networks (GNNs). This paper provides a principled framework for navigating this trade-off, using the challenging scientific domain of solving high-dimensional Hamilton-Jacobi-Bellman (HJB) partial differential equations as a testbed. Through a series of controlled experiments, we demonstrate that while flexible, unstructured networks excel on problems with smooth, globally structured solutions, they fail catastrophically on problems with complex, non-smooth features, where GNN-based solvers remain stable. We connect this success to the GNN's established properties as a spectral low-pass filter, demonstrating how this filtering provides the implicit Lipschitz regularization needed to learn stable and generalizable solutions, thereby preventing the numerical instabilities that plague unconstrained models. Our findings culminate in a framework that connects the mathematical properties of a problem's solution space to the optimal choice of GML architecture, offering a new perspective on the role of architectural bias as a powerful regularization tool.