Deep learning models have shown remarkable success in dermatological image analysis, offering potential for automated skin disease diagnosis. Previously, convolutional neural network (CNN) based architectures achieved immense popularity and success in computer vision (CV) tasks such as skin image recognition, image generation, and video analysis. With the emergence of transformer-based models, however, CV tasks are now increasingly carried out using these architectures. Vision Transformers (ViTs) are one such family of transformer-based models that have shown success in computer vision, using self-attention mechanisms to achieve state-of-the-art performance across various tasks. However, their reliance on global attention mechanisms makes them susceptible to adversarial perturbations. This paper investigates the susceptibility of ViTs applied to medical images to adversarial watermarking, a method that adds nominally imperceptible perturbations in order to fool models. By generating adversarial watermarks with Projected Gradient Descent (PGD), we examine the transferability of such attacks to CNNs and analyze the performance of a defense mechanism, adversarial training. Results indicate that while performance on clean images is not compromised, ViTs become markedly more vulnerable to adversarial attacks, with accuracy dropping to as low as 27.6%. Adversarial training, however, restores accuracy to 90.0%.
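To make the attack concrete, the following is a minimal sketch of PGD under an L-infinity constraint, shown on a toy differentiable model (logistic regression) rather than a ViT. The function name, model, and hyperparameter values here are illustrative assumptions, not the paper's implementation; the core loop of gradient-sign ascent followed by projection onto the epsilon-ball is the standard PGD recipe.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """Projected Gradient Descent on a toy logistic model (illustrative only).

    x: input vector, y: label in {0, 1}, (w, b): model parameters.
    The perturbation is kept inside the L-inf ball of radius eps around x.
    """
    x_adv = x.copy()
    for _ in range(steps):
        z = w @ x_adv + b
        p = 1.0 / (1.0 + np.exp(-z))              # sigmoid prediction
        grad = (p - y) * w                         # d(BCE loss)/d(x_adv)
        x_adv = x_adv + alpha * np.sign(grad)      # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project onto the eps-ball
    return x_adv
```

In an image setting, `x` would be the pixel tensor, the gradient would come from backpropagation through the ViT, and an extra clip to the valid pixel range would follow the projection step. Keeping `eps` small is what makes the watermark visually imperceptible while still degrading accuracy.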