Deep learning algorithms have proven powerful in many communication network design problems, including automatic modulation classification. However, they are vulnerable to carefully crafted attacks called adversarial examples, so the reliance of wireless networks on deep learning poses a serious threat to their security and operation. In this letter, we propose, for the first time, a countermeasure against adversarial examples in modulation classification. Our countermeasure is based on a neural rejection technique, augmented by label smoothing and Gaussian noise injection, that detects and rejects adversarial examples with high accuracy. Our results demonstrate that the proposed countermeasure can protect deep-learning-based modulation classification systems against adversarial examples.
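As a minimal sketch of the three ingredients named above, the following illustrates label smoothing, Gaussian noise injection, and confidence-based neural rejection. This is an illustrative NumPy sketch only; the function names and parameter values are assumptions, not the letter's actual implementation.

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    # Label smoothing: soften one-hot targets toward the uniform
    # distribution, discouraging overconfident predictions.
    k = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / k

def inject_noise(x, sigma=0.05, rng=None):
    # Gaussian noise injection: perturb training inputs with
    # zero-mean Gaussian noise (sigma is an illustrative value).
    rng = np.random.default_rng(0) if rng is None else rng
    return x + rng.normal(0.0, sigma, size=x.shape)

def reject_or_classify(probs, threshold=0.7):
    # Neural rejection: return the predicted class only when the top
    # softmax score clears a threshold; otherwise reject (return -1).
    return int(np.argmax(probs)) if np.max(probs) >= threshold else -1
```

Adversarial perturbations tend to push inputs into low-confidence regions of the classifier, which is why thresholding the softmax output can flag them for rejection rather than misclassification.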