
Sparse DNNs with Improved Adversarial Robustness
Yiwen Guo · Chao Zhang · Changshui Zhang · Yurong Chen

Thu Dec 06 02:00 PM -- 04:00 PM (PST) @ Room 210 #21
Deep neural networks (DNNs) are computationally and memory intensive and vulnerable to adversarial attacks, making them prohibitive in some real-world applications. By converting dense models into sparse ones, pruning appears to be a promising solution for reducing the computation/memory cost. This paper studies classification models, especially DNN-based ones, and demonstrates that there exist intrinsic relationships between their sparsity and adversarial robustness. Our analyses reveal, both theoretically and empirically, that nonlinear DNN-based classifiers behave differently under $l_2$ attacks from some linear ones. We further demonstrate that an appropriately higher model sparsity implies better robustness of nonlinear DNNs, whereas over-sparsified models find it more difficult to resist adversarial examples.
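The dense-to-sparse conversion mentioned above is commonly realized by magnitude-based weight pruning. The sketch below is an illustrative example of that generic technique, not the authors' exact procedure; the function name and the flat weight-list representation are assumptions for clarity:

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    A minimal sketch of magnitude-based pruning (hypothetical helper,
    not from the paper). `sparsity` is the target fraction of weights
    set to zero; ties at the threshold may prune slightly more.
    """
    n = len(weights)
    k = int(n * sparsity)  # number of weights to zero out
    if k == 0:
        return list(weights)
    # Threshold = k-th smallest absolute value among all weights.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    # Keep weights above the threshold; zero the rest.
    return [0.0 if abs(w) <= threshold else w for w in weights]
```

For example, pruning `[0.5, -0.1, 0.3, -0.8, 0.05, 0.2]` at 50% sparsity zeroes the three entries with the smallest magnitudes and keeps `0.5`, `0.3`, and `-0.8`. Varying `sparsity` in such a scheme is one way to produce the family of models, from dense to over-sparsified, whose robustness the paper compares.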

Author Information

Yiwen Guo (Intel Labs China)
Chao Zhang (Peking University)
Changshui Zhang (Tsinghua University)
Yurong Chen (Intel Labs China)
