Adversarial robustness in deep learning models for brain tumor classification remains an underexplored yet critical challenge, particularly for clinical deployment scenarios involving MRI data. In this work, we investigate the susceptibility and resilience of several ResNet-based architectures, referred to as BrainNet, BrainNeXt, and DilationNet, against gradient-based adversarial attacks, namely the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). These models, based on ResNet, ResNeXt, and dilated ResNet variants respectively, are evaluated across three preprocessing configurations: (i) full-sized augmented, (ii) shrunk augmented, and (iii) shrunk non-augmented MRI datasets. Our experiments reveal that BrainNeXt models exhibit the highest robustness to black-box attacks, likely due to their increased cardinality, though they produce weaker transferable adversarial samples. In contrast, BrainNet and DilationNet models are more vulnerable to attacks transferred from each other, especially under PGD with larger iteration counts and step sizes $\alpha$. Notably, shrunk and non-augmented data significantly reduce model resilience even when the untampered test accuracy remains high, highlighting a key trade-off between input resolution and adversarial vulnerability. These results underscore the importance of jointly evaluating classification performance and adversarial robustness for reliable real-world deployment in brain MRI analysis.
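The two attacks evaluated above can be sketched compactly. Below is a minimal NumPy illustration of FGSM and PGD against a toy linear softmax classifier, not the paper's ResNet-based models; the function names, the toy model, and the analytic input gradient are illustrative assumptions. FGSM takes a single signed-gradient step of size $\epsilon$, while PGD iterates smaller steps of size $\alpha$ and projects back into the $L_\infty$ ball of radius $\epsilon$ around the clean input.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def grad_wrt_input(x, y, W, b):
    # Cross-entropy loss L = -log softmax(Wx + b)[y];
    # the input gradient is dL/dx = W^T (p - onehot(y)).
    p = softmax(W @ x + b)
    p[y] -= 1.0
    return W.T @ p

def fgsm(x, y, W, b, eps):
    # Single-step attack: move eps in the sign of the loss gradient,
    # then clip to the valid pixel range [0, 1].
    x_adv = x + eps * np.sign(grad_wrt_input(x, y, W, b))
    return np.clip(x_adv, 0.0, 1.0)

def pgd(x, y, W, b, eps, alpha, steps):
    # Iterative FGSM with step size alpha, projecting back onto the
    # L_inf ball of radius eps around the clean input after each step.
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_wrt_input(x_adv, y, W, b))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project to eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep valid pixel range
    return x_adv
```

In this formulation, increasing `steps` and `alpha` strengthens PGD relative to single-step FGSM, which mirrors the abstract's observation that vulnerability grows under PGD with larger iteration counts and step sizes.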