

Poster

UniT: A Unified Look at Certified Robust Training against Text Adversarial Perturbation

Muchao Ye · Ziyi Yin · Tianrong Zhang · Tianyu Du · Jinghui Chen · Ting Wang · Fenglong Ma

Great Hall & Hall B1+B2 (level 1) #1700

Abstract:

Recent years have witnessed a surge of certified robust training pipelines against text adversarial perturbations constructed by synonym substitutions. Given a base model, existing pipelines provide prediction certificates either in the discrete word space or in the continuous latent space. However, the two approaches are isolated from each other by a structural gap. We observe that existing training frameworks need unification to provide stronger certified robustness. Additionally, they mainly focus on building the certification process but neglect to improve the robustness of the base model itself. To mitigate these limitations, we propose a unified framework named UniT that enables flexible training in either fashion by working in the word embedding space. It can provide a stronger robustness guarantee obtained directly from the word embedding space without extra modules. In addition, we introduce the decoupled regularization (DR) loss to improve the robustness of the base model, which consists of two separate robustness regularization terms for the feature extraction and classifier modules. Experimental results on widely used text classification datasets further demonstrate the effectiveness of the designed unified framework and the proposed DR loss in improving certified robust accuracy.
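To make the decoupled regularization idea concrete, below is a minimal sketch of how a training objective might combine a task loss with two separate robustness terms, one acting on the feature extractor's outputs and one on the classifier's logits, for a clean input and an embedding-space perturbed counterpart. This is an illustration under stated assumptions, not the authors' implementation: the function name dr_style_loss, the MSE and KL distance choices, and the weighting parameters lambda_feat and lambda_clf are all hypothetical.

```python
# Illustrative sketch only -- the exact regularizers and distances used by UniT may differ.
import torch
import torch.nn.functional as F


def dr_style_loss(feature_extractor, classifier, emb_clean, emb_perturbed, labels,
                  lambda_feat=1.0, lambda_clf=1.0):
    """Hypothetical decoupled-regularization-style objective.

    emb_clean / emb_perturbed: word-embedding inputs of shape (batch, seq_len, dim),
    where emb_perturbed is a perturbation of emb_clean in the embedding space.
    """
    # Run both inputs through the feature extraction module.
    feat_clean = feature_extractor(emb_clean)
    feat_pert = feature_extractor(emb_perturbed)

    # Run the extracted features through the classifier module.
    logits_clean = classifier(feat_clean)
    logits_pert = classifier(feat_pert)

    # Standard classification loss on the clean input.
    task_loss = F.cross_entropy(logits_clean, labels)

    # Feature-level regularizer: keep extracted features stable under perturbation.
    feat_reg = F.mse_loss(feat_pert, feat_clean)

    # Classifier-level regularizer: keep predictive distributions close.
    clf_reg = F.kl_div(F.log_softmax(logits_pert, dim=-1),
                       F.softmax(logits_clean, dim=-1),
                       reduction="batchmean")

    return task_loss + lambda_feat * feat_reg + lambda_clf * clf_reg
```

Keeping the two regularizers separate lets each module's robustness be weighted and tuned independently, which is the intuition behind decoupling the feature extraction and classifier terms.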
