With the great capabilities of deep classifiers for radar data processing come the risks of learning dataset-specific features that do not generalize well. In this work, the robustness of two deep convolutional architectures, trained and tested on the same data, is evaluated. When standard training practice is followed, both classifiers are sensitive to subtle temporal shifts of the input representation, an augmentation that carries minimal semantic content. Furthermore, the models are highly susceptible to adversarial examples. Both shortcomings stem from the models overfitting to features that do not generalize well. As a remedy, it is shown that training on adversarial examples and temporally augmented samples mitigates this effect and yields models that generalize better. Finally, models operating on the cadence-velocity diagram representation rather than the Doppler-time representation are shown to be naturally more resistant to adversarial examples.