Medical hyperspectral imaging (HSI) enables accurate disease diagnosis by capturing rich spectral-spatial tissue information, but recent advances in deep learning have exposed its vulnerability to adversarial attacks. In this work, we identify two fundamental causes of this fragility: the reliance on local pixel dependencies to preserve tissue structure, and the reliance on multiscale spectral-spatial representations for hierarchical feature encoding. Building on these insights, we propose a targeted adversarial attack framework for medical HSI, consisting of a Local Pixel Dependency Attack that exploits spatial correlations among neighboring pixels, and a Multiscale Information Attack that perturbs features across hierarchical spectral-spatial scales. Experiments on the Brain and MDC datasets demonstrate that our attacks significantly degrade classification performance, especially in tumor regions, while the perturbations remain visually imperceptible. Compared with existing methods, our approach reveals vulnerabilities unique to medical HSI models and underscores the need for robust, structure-aware defenses in clinical applications.
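The targeted attacks described above perturb the input so the classifier is steered toward an attacker-chosen label while the perturbation stays bounded and hence visually imperceptible. As a minimal illustration of this general principle (not the paper's actual Local Pixel Dependency or Multiscale Information Attack), the sketch below applies a single targeted FGSM-style step to a toy linear softmax classifier over one hyperspectral pixel spectrum. All shapes, the weight matrix `W`, and the budget `eps` are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: one targeted FGSM-style step on a toy linear softmax
# classifier for a single hyperspectral pixel. Everything here (W, b,
# band/class counts, eps) is a made-up illustration, not the paper's method.

rng = np.random.default_rng(0)
n_bands, n_classes = 64, 3              # e.g. 64 spectral bands, 3 tissue classes
W = rng.normal(size=(n_classes, n_bands))
b = np.zeros(n_classes)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(x):
    return softmax(W @ x + b)

x = rng.normal(size=n_bands)            # clean pixel spectrum
clean_label = int(np.argmax(predict(x)))
target = (clean_label + 1) % n_classes  # attacker-chosen wrong class

# For a linear softmax with cross-entropy toward the target class,
# the input gradient is dL/dx = W^T (p - one_hot(target)).
p = predict(x)
grad = W.T @ (p - np.eye(n_classes)[target])

eps = 0.01                              # small L-inf budget: imperceptible shift
x_adv = x - eps * np.sign(grad)         # step *toward* the target class

p_adv = predict(x_adv)
```

A full attack would iterate such steps under the budget and, per the abstract, shape the perturbation across neighboring pixels and multiple spectral-spatial scales rather than per pixel in isolation.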