Certified Robustness of Graph Convolution Networks for Graph Classification under Topological Attacks
Hongwei Jin, Zhan Shi, Ashish Peruri, Xinhua Zhang
Spotlight presentation: Orals & Spotlights Track 26: Graph/Relational/Theory
on 2020-12-10T07:30:00-08:00 - 2020-12-10T07:40:00-08:00
Poster Session 6
on 2020-12-10T09:00:00-08:00 - 2020-12-10T11:00:00-08:00
GatherTown: Deep Learning (Town C3 - Spot A1)
Abstract: Graph convolution networks (GCNs) have become effective models for graph classification. Similar to many deep networks, GCNs are vulnerable to adversarial attacks on graph topology and node attributes. Recently, a number of effective attack and defense algorithms have been developed, but certificates of robustness against \emph{topological perturbations} are currently available only for PageRank and label/feature propagation, while none has been designed for GCNs. We propose the first algorithm for certifying the robustness of GCNs to topological attacks in the application of \emph{graph classification}. Our method is based on Lagrange dualization and convex envelope, which result in tight approximation bounds that are efficiently computable by dynamic programming. When used in conjunction with robust training, it allows an increased number of graphs to be certified as robust.
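For readers who want a concrete picture of what such a certificate asserts, the following is a minimal, hypothetical NumPy sketch (the names gcn_forward and brute_force_certificate, the toy two-layer GCN, and the mean-pooling readout are all illustrative assumptions, not taken from the paper). It exhaustively checks, for a tiny graph and a small edge-flip budget, whether any topological perturbation changes the classifier's prediction. This brute-force enumeration is exponential in the budget and is not the paper's method, which instead bounds the worst case efficiently via Lagrange dualization, convex envelopes, and dynamic programming; the sketch only illustrates the certification question being answered.

import itertools
import numpy as np

def gcn_forward(A, X, W1, W2, w_out):
    """Toy two-layer GCN graph classifier: mean-pool node embeddings, then a linear readout."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ A_hat @ D_inv_sqrt             # symmetric normalization
    H = np.maximum(S @ X @ W1, 0.0)                 # ReLU layer
    H = S @ H @ W2
    g = H.mean(axis=0)                              # graph-level readout: mean pooling
    return g @ w_out                                # class scores

def brute_force_certificate(A, X, weights, y_true, budget):
    """Return True iff no perturbation of at most `budget` edge flips changes the prediction.
    Exponential in `budget`; feasible only for tiny graphs, shown purely for illustration."""
    n = A.shape[0]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    for k in range(1, budget + 1):
        for flips in itertools.combinations(pairs, k):
            A_pert = A.copy()
            for i, j in flips:
                A_pert[i, j] = A_pert[j, i] = 1 - A_pert[i, j]   # flip edge (i, j)
            scores = gcn_forward(A_pert, X, *weights)
            if scores.argmax() != y_true:
                return False    # found an adversarial perturbation within budget
    return True                 # certified: prediction is stable under all such perturbations

# Usage on random data (hypothetical example):
rng = np.random.default_rng(0)
n, d, h, C = 6, 4, 8, 2
A = np.triu((rng.random((n, n)) < 0.4).astype(float), 1)
A = A + A.T                                         # symmetric, no self-loops
X = rng.standard_normal((n, d))
weights = (rng.standard_normal((d, h)),
           rng.standard_normal((h, h)),
           rng.standard_normal((h, C)))
y = gcn_forward(A, X, *weights).argmax()
print(brute_force_certificate(A, X, weights, y, budget=1))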