

Poster

Generalizing CNNs to graphs with learnable neighborhood quantization

Isaac Osafo Nkansah · Ruchi Sandilya · Neil Gallagher · Conor Liston · Logan Grosenick

East Exhibit Hall A-C #3000
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Convolutional neural networks (CNNs) have led to a revolution in analyzing array data. However, many important sources of data, such as biological and social networks, are naturally structured as graphs rather than dense arrays, making the design of graph convolutional network (GCN) architectures that retain the strengths of CNNs an active and exciting area of research. Here, we introduce Quantized Graph Convolution Networks (QGCNs), the first GCN framework to directly extend CNNs by decomposing the convolution operation into non-overlapping sub-kernels. We show that a QGCN layer is provably identical to a 2D CNN layer on a local neighborhood of pixels. We then generalize this approach to graphs of arbitrary dimension by treating sub-kernel assignment as a learnable multinomial assignment problem. Integrating this approach into a residual network architecture, we demonstrate performance that matches or exceeds other state-of-the-art GCNs, both on a suite of benchmark datasets for graph methods and on a new benchmark dataset we introduce for predicting properties of the flow-past-a-cylinder problem. In summary, we present state-of-the-art results using QGCNs, a novel GCN framework that generalizes CNNs and their strengths to graph data.
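The core idea in the abstract — splitting a graph convolution into non-overlapping sub-kernels, with each edge assigned to a sub-kernel by a learnable multinomial — can be sketched as follows. This is an illustrative NumPy implementation, not the authors' code: the function names (`quantized_graph_conv`, `softmax`), the choice of per-edge features to drive the assignment, and the soft (softmax-relaxed) multinomial are all assumptions made for the sake of a runnable example.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def quantized_graph_conv(X, edges, edge_feat, W, A):
    """One illustrative quantized graph convolution layer.

    X         : (N, F_in)  node features
    edges     : (E, 2)     (src, dst) index pairs
    edge_feat : (E, D)     per-edge features (e.g. relative positions),
                           used to decide which sub-kernel an edge belongs to
    W         : (K, F_in, F_out)  one weight matrix per sub-kernel
    A         : (D, K)     learnable projection producing assignment logits
    """
    N, F_out = X.shape[0], W.shape[2]
    # soft multinomial assignment of each edge to one of K sub-kernels
    assign = softmax(edge_feat @ A, axis=-1)          # (E, K)
    out = np.zeros((N, F_out))
    for k in range(W.shape[0]):
        # messages transformed by sub-kernel k, weighted by its assignment
        msg = (X[edges[:, 0]] @ W[k]) * assign[:, k:k + 1]   # (E, F_out)
        np.add.at(out, edges[:, 1], msg)              # scatter-sum to dst
    return out
```

Intuition for the CNN connection: on a regular pixel grid, each neighbor of a pixel sits at a fixed relative offset, so a hard (one-hot) assignment of edges to sub-kernels by offset recovers exactly the per-offset weights of a standard 2D convolution kernel; the multinomial relaxation lets the same machinery apply to irregular graphs where offsets vary.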
