The application of deep learning models in medical diagnosis has shown considerable efficacy in recent years. Nevertheless, a notable limitation is the inherent lack of explainability in their decision-making processes. This study addresses that constraint by enhancing the robustness of the generated interpretations. The primary focus is on refining the explanations produced by the LIME library and its image explainer, achieved through post-processing mechanisms based on scenario-specific rules. Multiple experiments were conducted using publicly accessible datasets related to brain tumor detection. Our proposed post-heuristic approach demonstrates significant advancements, yielding more robust and concrete results in the context of medical diagnosis.
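The rule-based post-processing idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `refine_explanation`, the rule (keep only the top-k positively weighted superpixels above a threshold), and the toy data are all hypothetical, standing in for the scenario-specific rules applied to LIME image-explainer output.

```python
import numpy as np

def refine_explanation(weights, segments, top_k=3, min_weight=0.0):
    """Hypothetical post-processing rule for a LIME image explanation.

    weights  -- dict mapping superpixel id -> LIME weight
    segments -- 2D array assigning each pixel to a superpixel id
    Keeps only the top_k superpixels with weight above min_weight,
    returning a boolean mask over the image.
    """
    kept = sorted(
        (sid for sid, w in weights.items() if w > min_weight),
        key=lambda sid: weights[sid],
        reverse=True,
    )[:top_k]
    return np.isin(segments, kept)

# Toy 4x4 image split into 4 superpixels, with illustrative weights.
segments = np.array([[0, 0, 1, 1],
                     [0, 0, 1, 1],
                     [2, 2, 3, 3],
                     [2, 2, 3, 3]])
weights = {0: 0.8, 1: -0.2, 2: 0.5, 3: 0.1}
mask = refine_explanation(weights, segments, top_k=2)
# Only the two strongest positive superpixels (0 and 2) survive.
```

In practice such a mask would be intersected with the heatmap returned by LIME's `get_image_and_mask`, discarding weakly or negatively weighted regions before the explanation is shown to a clinician.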