With the development and application of deep learning in signal detection tasks, the vulnerability of neural networks to adversarial attacks has become a security threat to signal detection networks. This paper defines a signal adversarial example generation model for signal detection networks from the perspective of adding perturbations to the signal. The model uses an L2-norm inequality between the time domain and the time-frequency domain to constrain the energy of the signal perturbation. Building on this model, we propose a method for generating signal adversarial examples using gradient-based attacks and the Short-Time Fourier Transform (STFT). Experimental results show that, under the constraint that the perturbation energy ratio is below 3%, our adversarial attack reduces the mean Average Precision (mAP) of the signal detection network by 28.1%, recall by 24.7%, and precision by 30.4%. Compared with random noise perturbation of equivalent intensity, our adversarial attack demonstrates a significantly stronger attack effect.
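The energy-ratio constraint described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the network gradient is replaced by a stand-in random direction (since the detection network is not given here), the perturbation is projected onto the 3% energy-ratio budget, and a Parseval identity for the plain DFT is checked as a simplified proxy for the time-domain/time-frequency-domain L2-norm relationship the paper uses with the STFT.

```python
import numpy as np

def project_energy_ratio(delta, x, ratio=0.03):
    """Scale delta so that ||delta||_2^2 / ||x||_2^2 <= ratio."""
    budget = ratio * np.sum(x ** 2)
    energy = np.sum(delta ** 2)
    if energy > budget:
        delta = delta * np.sqrt(budget / energy)
    return delta

rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 5 * np.linspace(0.0, 1.0, 1024))  # clean signal

# Stand-in for the gradient of the detection loss w.r.t. the input signal;
# a real attack would backpropagate through the detection network.
grad = rng.standard_normal(x.shape)

# Sign-gradient step (FGSM-style), then projection onto the energy budget.
delta = project_energy_ratio(0.5 * np.sign(grad), x)
x_adv = x + delta

energy_ratio = np.sum(delta ** 2) / np.sum(x ** 2)

# Parseval's theorem for the unnormalized DFT (NumPy convention):
# sum |X_k|^2 == N * sum |x_n|^2, linking time- and frequency-domain energy.
X_adv = np.fft.fft(x_adv)
parseval_ok = np.allclose(np.sum(np.abs(X_adv) ** 2),
                          len(x_adv) * np.sum(x_adv ** 2))
```

After projection, `energy_ratio` sits at or below the 3% budget, so constraining energy in the time domain bounds the perturbation energy seen in the frequency domain as well.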