Accurate and reliable image classification is crucial in radiology, where diagnostic decisions significantly impact patient outcomes. Conventional deep learning models tend to produce overconfident predictions despite underlying uncertainty, potentially leading to misdiagnosis. Attention mechanisms have emerged as powerful tools in deep learning, enabling models to focus on the most relevant parts of the input. Combined with feature fusion, they can be effective in addressing uncertainty challenges. Cross-attention has become increasingly important in medical image analysis for capturing dependencies across features and modalities. This paper proposes a novel dual cross-attention fusion model for medical image analysis that addresses key challenges in feature integration and interpretability. Our approach introduces a bidirectional cross-attention mechanism with refined channel and spatial attention that dynamically fuses feature maps from EfficientNetB4 and ResNet34, leveraging multi-network contextual dependencies. The features refined through channel and spatial attention highlight discriminative patterns crucial for accurate classification. The proposed model achieved AUCs of 99.75%, 100%, 99.93%, and 98.69% and AUPRs of 99.81%, 100%, 99.97%, and 96.36% on COVID-19, tuberculosis, and pneumonia chest X-ray images and retinal OCT images, respectively. Entropy values, together with visualizations of highly uncertain samples, provide interpretable insight into the model's predictions, enhancing transparency. By combining multi-scale feature extraction, bidirectional attention, and uncertainty estimation, the proposed model makes a strong contribution to medical image analysis.
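The fusion mechanism described above can be sketched in PyTorch. This is a minimal, hypothetical illustration, not the authors' implementation: the module and parameter names are invented, the channel attention follows a generic SE-style design and the spatial attention a generic CBAM-style design, and the two backbone feature maps are assumed to have already been projected to a common shape before fusion.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """SE-style channel attention: reweight channels by global context."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # (B, C, 1, 1)
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)


class SpatialAttention(nn.Module):
    """CBAM-style spatial attention: reweight locations by channel statistics."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)                 # (B, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)                # (B, 1, H, W)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn


class BidirectionalCrossAttentionFusion(nn.Module):
    """Hypothetical sketch: each backbone's features attend to the other's,
    and the concatenated result is refined by channel + spatial attention."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.a_to_b = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.b_to_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.refine = nn.Sequential(ChannelAttention(2 * dim), SpatialAttention())

    def forward(self, fa, fb):
        # fa, fb: (B, C, H, W) maps, assumed already projected to matching shapes
        B, C, H, W = fa.shape
        ta = fa.flatten(2).transpose(1, 2)                # (B, HW, C)
        tb = fb.flatten(2).transpose(1, 2)
        a_att, _ = self.a_to_b(ta, tb, tb)                # branch A queries branch B
        b_att, _ = self.b_to_a(tb, ta, ta)                # branch B queries branch A
        a_map = a_att.transpose(1, 2).reshape(B, C, H, W)
        b_map = b_att.transpose(1, 2).reshape(B, C, H, W)
        fused = torch.cat([a_map, b_map], dim=1)          # (B, 2C, H, W)
        return self.refine(fused)
```

In practice the two inputs would be the final feature maps of EfficientNetB4 and ResNet34 after 1x1 projection to a shared channel width; the fused map then feeds a standard classification head.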
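The entropy-based uncertainty reporting mentioned in the abstract can be illustrated with a short sketch. This is a generic implementation of predictive (Shannon) entropy over softmax outputs, not the paper's exact procedure; the function name and the idea of flagging high-entropy samples for review are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def predictive_entropy(logits):
    """Shannon entropy of the softmax distribution per sample.

    Higher entropy means the model spreads probability mass across
    classes, i.e. the prediction is more uncertain.
    """
    probs = F.softmax(logits, dim=-1)
    log_probs = F.log_softmax(logits, dim=-1)
    return -(probs * log_probs).sum(dim=-1)


# Two illustrative 4-class predictions: one confident, one near-uniform.
logits = torch.tensor([
    [6.0, 0.1, 0.1, 0.1],   # confident: low entropy
    [1.0, 0.9, 1.1, 1.0],   # ambiguous: high entropy
])
entropy = predictive_entropy(logits)
```

Samples whose entropy exceeds a chosen threshold can be surfaced to a radiologist instead of being auto-classified, which is one way entropy values support the transparency claim in the abstract.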