Adversarial Attack Method Based on Time-Shift Transformation
First published: 2023-02-01
Abstract: Automatic modulation recognition models based on deep neural networks offer autonomous analysis and automatic feature extraction, and are a key technology for cognitive radio in non-cooperative communication. However, related studies have shown that deep neural networks are fragile and highly vulnerable to adversarial examples. Starting from the black-box setting, this paper studies the transferability of adversarial examples and proposes an adversarial attack method based on time-shift transformation: during adversarial example generation, the original input signal is randomly time-shifted to mitigate "overfitting" of the adversarial examples to the model. Experimental results on a public dataset show that, compared with traditional gradient-based attack methods, the proposed method achieves comparable performance under white-box attacks but a higher attack success rate under black-box attacks; that is, the generated adversarial examples have better transferability.
Keywords: deep neural network; adversarial attack; modulation recognition; transferability
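The core of the method, as described in the abstract, is an input transformation applied inside the attack loop. Below is a minimal sketch of that idea, assuming an I-FGSM-style untargeted attack on a PyTorch classifier that takes (batch, 2, N) I/Q frames; model, the tensor layout, and the hyperparameters eps, alpha, steps, and max_shift are illustrative assumptions, not values from the paper. At each iteration, the current adversarial signal is randomly circularly shifted along the time axis before the gradient is computed, so the perturbation is discouraged from overfitting the surrogate model.

import torch
import torch.nn.functional as F

def time_shift_attack(model, x, y, eps=0.05, alpha=0.005, steps=10, max_shift=32):
    """Iterative untargeted attack with random time-shift augmentation.

    x: clean signals of shape (batch, 2, N); the two rows are the I and Q channels.
    y: true modulation labels of shape (batch,).
    eps: L-infinity perturbation budget; alpha: per-iteration step size.
    All names and values are illustrative assumptions, not the authors' code.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Randomly circular-shift the signal along the time axis before the
        # forward pass; torch.roll is differentiable, so the gradient flows
        # back through the shift to the unshifted adversarial signal.
        shift = int(torch.randint(-max_shift, max_shift + 1, (1,)))
        loss = F.cross_entropy(model(torch.roll(x_adv, shifts=shift, dims=-1)), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # Ascend the loss, then project back into the eps-ball around x.
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.detach()
    return x_adv

In spirit this resembles input-diversity attacks on image classifiers, where a random differentiable transformation is applied before each gradient step to improve transferability; a circular time shift is a natural choice for modulation signals, since shifting an I/Q frame in time leaves its modulation class unchanged.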