Deep learning-based person re-identification (re-id) models are widely employed in surveillance systems and inevitably inherit the vulnerability of deep networks to adversarial attacks. Existing attacks consider only cross-dataset and cross-model transferability, ignoring the cross-test capability to perturb models trained in different domains. To rigorously examine the robustness of real-world re-id models, we propose the Meta Transferable Generative Attack (MTGA) method, which adopts meta-learning optimization to guide the generative attacker toward producing highly transferable adversarial examples by comprehensively simulating transfer-based cross-model\&dataset\&test black-box meta attack tasks. Specifically, cross-model\&dataset black-box attack tasks are first mimicked by selecting different re-id models and datasets for the meta-train and meta-test attack processes. Since different models may focus on different feature regions, the Perturbation Random Erasing module is further devised to prevent the attacker from learning to corrupt only model-specific features. To equip the attacker with cross-test transferability, the Normalization Mix strategy is introduced to imitate diverse feature embedding spaces by mixing the multi-domain statistics of target models. Extensive experiments demonstrate the superiority of MTGA: in cross-model\&dataset and cross-model\&dataset\&test attacks, MTGA outperforms SOTA methods by 21.5\% and 11.3\% in mean mAP drop rate, respectively. The code of MTGA will be released after the paper is accepted.
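The two components named above can be illustrated with a minimal sketch. All function names, signatures, and hyperparameters below are illustrative assumptions, not the paper's actual implementation: Perturbation Random Erasing is sketched as zeroing a random rectangle of the generated perturbation so the attacker cannot rely on any single model-specific region, and Normalization Mix is sketched as a convex combination of per-domain normalization statistics (e.g. the running mean/variance of batch-norm layers from models trained on different datasets).

```python
import torch

def perturbation_random_erasing(delta, erase_prob=0.5, area_frac=(0.02, 0.2)):
    """Zero out a random rectangle of the adversarial perturbation.

    delta: (B, C, H, W) perturbation tensor. Probability and area range
    are illustrative defaults, not the paper's hyperparameters.
    """
    if torch.rand(1).item() > erase_prob:
        return delta
    _, _, h, w = delta.shape
    # Sample a target area fraction, then a square-ish erase window.
    frac = torch.empty(1).uniform_(*area_frac).item()
    eh = max(1, int(h * frac ** 0.5))
    ew = max(1, int(w * frac ** 0.5))
    top = torch.randint(0, h - eh + 1, (1,)).item()
    left = torch.randint(0, w - ew + 1, (1,)).item()
    delta = delta.clone()  # leave the caller's tensor untouched
    delta[:, :, top:top + eh, left:left + ew] = 0
    return delta

def normalization_mix(stats, weights=None):
    """Convex combination of per-domain (mean, var) normalization statistics.

    stats: list of (mean, var) tensor pairs gathered from models trained
    on different domains; equal weights by default. A simple convex mix
    is one plausible reading of "mixing multi-domain statistics".
    """
    n = len(stats)
    if weights is None:
        weights = torch.full((n,), 1.0 / n)
    mixed_mean = sum(w * m for w, (m, _) in zip(weights, stats))
    mixed_var = sum(w * v for w, (_, v) in zip(weights, stats))
    return mixed_mean, mixed_var
```

In this reading, the mixed statistics would replace a target model's own normalization statistics during the meta attack, exposing the attacker to feature embedding spaces it never directly trained against.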