
Sketch-GNN: Scalable Graph Neural Networks with Sublinear Training Complexity
Mucong Ding · Tahseen Rabbani · Bang An · Evan Wang · Furong Huang

Thu Dec 01 02:00 PM -- 04:00 PM (PST) @ Hall J #220

Graph Neural Networks (GNNs) are widely applied to graph learning problems such as node classification. When scaling GNNs to larger graphs, we are forced either to train on the complete graph, keeping the full graph adjacency and node embeddings in memory (which is often infeasible), or to mini-batch sample the graph (which incurs computational complexity that grows exponentially with the number of GNN layers). Various sampling-based and historical-embedding-based methods have been proposed to avoid this exponential growth, but none of them eliminates the linear dependence on graph size. This paper proposes a sketch-based algorithm whose training time and memory grow sublinearly with graph size, achieved by training GNNs atop a few compact sketches of the graph adjacency and node embeddings. Based on polynomial tensor-sketch (PTS) theory, our framework provides a novel protocol for sketching non-linear activations and graph convolution matrices in GNNs, in contrast to existing methods that sketch linear weights or gradients in neural networks. In addition, we develop a locality-sensitive hashing (LSH) technique that can be trained to improve the quality of the sketches. Experiments on large-graph benchmarks demonstrate the scalability and competitive performance of our Sketch-GNNs versus their full-size GNN counterparts.
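To give a concrete feel for the kind of compression the abstract describes, below is a minimal, hypothetical sketch of the count-sketch primitive that underlies tensor-sketch constructions: it compresses an n-row node-embedding matrix into c << n rows via random hashing with signs. The function name `count_sketch` and parameters `c`, `h`, `s` are illustrative assumptions, not the paper's actual implementation or API.

```python
import numpy as np

def count_sketch(X, c, seed=0):
    """Compress the n rows of X into c buckets via a count sketch.

    Row i is hashed to bucket h(i) with a random sign s(i); the
    resulting sketch has shape (c, d) with c << n, so downstream
    computation no longer scales linearly with n.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    h = rng.integers(0, c, size=n)       # bucket assignment h: [n] -> [c]
    s = rng.choice([-1.0, 1.0], size=n)  # random signs
    SX = np.zeros((c, d))
    np.add.at(SX, h, s[:, None] * X)     # SX[h[i]] += s[i] * X[i]
    return SX

# Example: sketch 100k node embeddings of dimension 64 down to 1k rows.
X = np.random.randn(100_000, 64)
SX = count_sketch(X, c=1_000)
print(SX.shape)  # (1000, 64)
```

The paper's contribution goes beyond this primitive: it sketches non-linear activations and graph convolution matrices via polynomial tensor-sketch theory and learns LSH-based hash functions, whereas the snippet above only illustrates the basic linear sketching step under the stated assumptions.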

Author Information

Mucong Ding (Department of Computer Science, University of Maryland, College Park)
Tahseen Rabbani (University of Maryland, College Park)
Bang An (University of Maryland, College Park)
Evan Wang (California Institute of Technology)
Furong Huang (University of Maryland)
