The interpretability of deep learning models is crucial for evaluating the reliability of medical imaging systems and reducing the risk of inaccurate patient recommendations. This study addresses the "human out of the loop" and trustworthiness problems in medical image analysis by integrating medical professionals into the interpretability process. We propose a disease-weighted attention map refinement network (DWARF) that leverages expert feedback to enhance the relevance and accuracy of model attention. Our method employs cyclic training to iteratively improve diagnostic performance while generating precise, interpretable feature maps. Experimental results demonstrate significant improvements in both interpretability and diagnostic accuracy across multiple medical imaging datasets. This approach fosters effective collaboration between AI systems and healthcare professionals, ultimately aiming to improve patient outcomes.