Adversarial robustness is a critical challenge in deploying deep neural networks for real-world applications. While adversarial training is a widely recognized defense strategy, most existing studies focus on balanced datasets, overlooking the prevalence of long-tailed distributions in real-world data, which significantly complicates achieving robustness. This paper provides a comprehensive analysis of adversarial training under long-tailed distributions and identifies limitations of the current state-of-the-art method, AT-BSL, in achieving robust performance under such conditions. To address these challenges, we propose a novel training framework, TAET, which integrates an initial stabilization phase followed by a stratified, equalization adversarial training phase. Additionally, prior work on long-tailed robustness has largely ignored the crucial evaluation metric of balanced accuracy. To bridge this gap, we introduce the concept of balanced robustness, a comprehensive metric tailored to assessing robustness under long-tailed distributions. Extensive experiments demonstrate that our method surpasses existing advanced defenses while achieving significant improvements in both memory and computational efficiency. This work represents a substantial advancement in addressing robustness challenges in real-world applications. Our code is available at: https://github.com/BuhuiOK/TAET-Two-Stage-Adversarial-Equalization-Training-on-Long-Tailed-Distributions.