Adversarial attacks on time series classification (TSC) models have recently gained attention for their potential to compromise model robustness. Imperceptibility is crucial: adversarial examples detectable by the human vision system (HVS) render an attack ineffective. Many existing methods fail to produce high-quality imperceptible examples, generating perturbations dominated by perceptible low-frequency components (e.g., square-wave-like artifacts) and applied globally, both of which reduce stealthiness. This paper improves the imperceptibility of adversarial attacks on TSC models by jointly addressing frequency content and time series locality. We propose the Shapelet-based Frequency-domain Attack (SFAttack), which restricts perturbations to time series shapelets (the locally discriminative subsequences), improving both attack effectiveness and stealthiness. We further introduce a low-frequency constraint that confines perturbations to high-frequency components, enhancing imperceptibility.
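The two ingredients described above, locality (perturb only the shapelet window) and the low-frequency constraint (keep only high-frequency perturbation components), can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function name, the hard FFT cutoff, and the shapelet window are all assumptions for demonstration.

```python
import numpy as np

def high_frequency_local_perturbation(x, delta, start, end, cutoff):
    """Illustrative sketch (not the paper's algorithm): confine a candidate
    perturbation `delta` to the shapelet window [start, end), then remove its
    low-frequency content with a hard mask in the real FFT domain."""
    mask = np.zeros_like(delta)
    mask[start:end] = 1.0
    local = delta * mask                       # locality: perturb only the shapelet
    spec = np.fft.rfft(local)
    spec[:cutoff] = 0.0                        # low-frequency constraint
    filtered = np.fft.irfft(spec, n=len(local))
    return x + filtered

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 4 * np.pi, 128))     # toy time series
delta = 0.1 * rng.standard_normal(128)         # candidate perturbation
# hypothetical shapelet window [40, 70) and cutoff of 8 frequency bins
adv = high_frequency_local_perturbation(x, delta, 40, 70, cutoff=8)
```

Note that in practice an attack would optimize `delta` against the target classifier; also, high-pass filtering a windowed signal leaks slightly outside the window, so the locality here is approximate rather than strict.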