Poster
Not All Low-Pass Filters are Robust in Graph Convolutional Networks
Heng Chang · Yu Rong · Tingyang Xu · Yatao Bian · Shiji Zhou · Xin Wang · Junzhou Huang · Wenwu Zhu

Thu Dec 09 12:30 AM -- 02:00 AM (PST)

Graph Convolutional Networks (GCNs) are promising deep learning approaches for learning representations of graph-structured data. Despite the proliferation of such methods, it is well known that they are vulnerable to carefully crafted adversarial attacks on the graph structure. In this paper, we first conduct an adversarial vulnerability analysis based on matrix perturbation theory. We prove that the low-frequency components of the symmetric normalized Laplacian, which is usually used as the convolutional filter in GCNs, can be more robust against structural perturbations when their eigenvalues fall into a certain robust interval. Our results indicate that not all low-frequency components are robust to adversarial attacks and provide a deeper understanding of the relationship between the graph spectrum and the robustness of GCNs. Motivated by this theory, we present GCN-LFR, a general robust co-training paradigm for GCN-based models that encourages transferring the robustness of low-frequency components with an auxiliary neural network. To this end, GCN-LFR can enhance the robustness of various kinds of GCN-based models against poisoning structural attacks in a plug-and-play manner. Extensive experiments across five benchmark datasets and five GCN-based models confirm that GCN-LFR is resistant to adversarial attacks without compromising performance in the benign setting.
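As a minimal illustration of the object the abstract analyzes (this is not the authors' code), the sketch below builds the symmetric normalized Laplacian of a toy graph with NumPy and inspects its spectrum; the small eigenvalues correspond to the low-frequency components whose robustness the paper studies. The threshold used to mark "low-frequency" here is purely illustrative, not the robust interval derived in the paper:

```python
import numpy as np

# Toy undirected graph on 4 nodes, given by its adjacency matrix.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2},
# the standard convolutional filter basis in GCNs.
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt

# Eigendecomposition; eigenvalues of L lie in [0, 2], sorted ascending.
eigvals, eigvecs = np.linalg.eigh(L)

# Illustrative cut: treat eigenvalues below 1.0 as "low-frequency".
low_freq = eigvecs[:, eigvals < 1.0]
print(np.round(eigvals, 3))
```

The paper's contribution is to show that only those low-frequency components whose eigenvalues fall into a specific robust interval remain stable under structural perturbations of `A`, rather than all of them.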

Author Information

Heng Chang (Tsinghua University)
Yu Rong (Tencent AI Lab)
Tingyang Xu (Tencent AI Lab)
Yatao Bian (Tencent AI Lab)
Shiji Zhou (Tsinghua-Berkeley Shenzhen Institute, Tsinghua University)
Xin Wang (Tsinghua University)
Junzhou Huang (University of Texas at Arlington / Tencent AI Lab)
Wenwu Zhu (Tsinghua University)
