Deep learning techniques have achieved superior performance in computer-aided medical image analysis, yet they remain vulnerable to imperceptible adversarial attacks, which can cause misdiagnosis in clinical practice. Conversely, recent years have also witnessed remarkable progress in defending deep medical diagnosis systems against such tailored adversarial examples. In this paper, we present a comprehensive survey of recent advances in adversarial attacks and defenses for medical image analysis, organized under a systematic taxonomy of application scenarios. We also provide a unified framework that covers the different types of adversarial attack and defense methods in the context of medical image analysis. For fair comparison, we establish a new benchmark of adversarially robust medical diagnosis models obtained by adversarial training under various scenarios. To the best of our knowledge, this is the first survey to provide a thorough evaluation of adversarially robust medical diagnosis models. Drawing on both qualitative and quantitative results, we conclude with a detailed discussion of the open challenges in adversarial attack and defense for medical image analysis systems, with the aim of shedding light on future research directions. Code is available on \href{https://github.com/tomvii/Adv_MIA}{\color{red}{GitHub}}.