Poster
Cross-Modal Learning with Adversarial Samples
Chao Li · Shangqian Gao · Cheng Deng · De Xie · Wei Liu

Wed Dec 11th 05:00 -- 07:00 PM @ East Exhibition Hall B + C #92

With the rapid development of deep neural networks, numerous deep cross-modal analysis methods have been proposed and applied in a wide range of real-world settings, including healthcare and other safety-critical environments. However, recent studies on the robustness and stability of deep neural networks show that a tiny, human-imperceptible perturbation, known as an adversarial sample, can easily fool a well-trained deep neural network, posing a new obstacle to exploring deep cross-modal correlations. In this paper, we propose a novel method for Cross-Modal correlation Learning with Adversarial samples, namely CMLA, which demonstrates for the first time the existence of adversarial samples in cross-modal data. Moreover, we provide a simple yet effective adversarial sample learning method, in which inter- and intra-modality similarity regularizations across different modalities are simultaneously integrated into the learning of adversarial samples. Finally, our proposed CMLA is demonstrated to be highly effective in cross-modal hashing based retrieval. Extensive experiments on two cross-modal benchmark datasets show that the adversarial examples produced by CMLA are effective at fooling a target deep cross-modal hashing network. Conversely, such adversarial examples can significantly strengthen the robustness of the target network through adversarial training.
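The abstract above does not include reference code, so the following is only a minimal sketch of the general idea of attacking a cross-modal hashing retriever, under loud assumptions: a toy linear hashing model (`sign(Wx)`), a `tanh` relaxation of the non-differentiable sign, and a single FGSM-style step on the image feature in place of CMLA's full objective with inter- and intra-modality similarity regularizations. All function names here are hypothetical, not from the paper.

```python
import math

def hash_code(W, x):
    """Binary code of feature x under a toy linear hashing model: sign(w_k . x)."""
    return [1 if sum(w_i * x_i for w_i, x_i in zip(w, x)) >= 0 else -1 for w in W]

def similarity_grad(W, x, t_code):
    """Gradient w.r.t. x of the relaxed inter-modality similarity
    sum_k tanh(w_k . x) * t_k, where t_code is the paired text's binary code
    and tanh serves as a smooth surrogate for sign."""
    grad = [0.0] * len(x)
    for w, t in zip(W, t_code):
        z = sum(w_i * x_i for w_i, x_i in zip(w, x))
        coeff = (1.0 - math.tanh(z) ** 2) * t  # d/dz tanh(z) = 1 - tanh(z)^2
        for i, w_i in enumerate(w):
            grad[i] += coeff * w_i
    return grad

def fgsm_attack(W, x, t_code, eps=0.1):
    """One FGSM step that decreases image-text code agreement:
    move x against the sign of the similarity gradient."""
    g = similarity_grad(W, x, t_code)
    return [x_i - eps * (1 if g_i >= 0 else -1) for x_i, g_i in zip(x, g)]
```

For an image feature whose projections lie close to the hashing hyperplanes, even a small `eps` flips hash bits and breaks the image-text match, which is the retrieval-fooling effect the paper measures; CMLA's learned perturbations additionally constrain intra-modality similarity so that the adversarial sample stays consistent with its own modality.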

Author Information

Chao Li (Xidian University)
Shangqian Gao (University of Pittsburgh)
Cheng Deng (Xidian University)
De Xie (Xidian University)
Wei Liu (Tencent AI Lab)