Diabetic retinopathy (DR) is a leading cause of vision loss worldwide. To aid its diagnosis, many recent works have built powerful deep neural networks (DNNs) that automatically grade DR from retinal fundus images (RFIs). However, RFIs are commonly affected by camera exposure issues that can lead to incorrect grades, and a mis-graded result risks delaying treatment and allowing the condition to worsen. In this paper, we study this problem from the viewpoint of adversarial attacks. We identify and propose a solution to an entirely new task, termed the adversarial exposure attack, which produces natural-looking exposure-modified images that mislead state-of-the-art DNNs. We validate the proposed method on a real-world public DR dataset with three DNNs, i.e., ResNet50, MobileNet, and EfficientNet, demonstrating that our method achieves both high image quality and a high success rate when transferring attacks across models. Our method reveals potential threats to DNN-based automatic DR grading and should benefit the future development of exposure-robust DR grading methods.
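The abstract does not describe the attack's internals, but the core idea of an exposure attack can be loosely illustrated. The following is a minimal, hypothetical sketch (not the authors' method): a single additive exposure offset is optimized by signed-gradient search to flip the decision of a toy grading classifier, while pixel values stay in a plausible range. The classifier, its parameters, and all function names here are illustrative assumptions.

```python
import numpy as np

# Hypothetical stand-in for a DR grading model: a logistic classifier
# on global image brightness (a real attack would target a deep net).
# W_MEAN and T are illustrative parameters, not from the paper.
W_MEAN, T = 8.0, 0.45

def grade_logit(img):
    # Positive logit -> "refer" grade; negative -> "no DR" grade.
    return W_MEAN * (float(img.mean()) - T)

def exposure_attack(img, steps=200, lr=0.005):
    """Signed-gradient search over a single additive exposure offset d
    (a uniform brightness shift) that flips the model's decision while
    keeping pixel values in the plausible [0, 1] range."""
    d = 0.0
    target = -np.sign(grade_logit(img))      # aim for the opposite decision
    for _ in range(steps):
        adv = np.clip(img + d, 0.0, 1.0)
        if np.sign(grade_logit(adv)) == target:
            break                            # decision flipped
        # d(logit)/d(d) = W_MEAN before clipping, so step along its sign.
        d += lr * target * np.sign(W_MEAN)
    return np.clip(img + d, 0.0, 1.0)
```

An over-exposed RFI corresponds to a positive offset and an under-exposed one to a negative offset; because the perturbation is a global brightness change rather than per-pixel noise, the attacked image still looks like a naturally mis-exposed photograph, which is the property the abstract highlights.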