The increasing complexity of Artificial Intelligence models poses challenges to interpretability, particularly in the healthcare sector. This study investigates the relationship between deep learning model complexity and Explainable AI (XAI) efficacy, utilizing four ResNet architectures (ResNet-18, 34, 50, 101). Through methodical experimentation on 4,369 lung X-ray images of COVID-19-infected and healthy patients, the research evaluates the models' classification performance and the relevance of the corresponding XAI explanations with respect to ground-truth disease masks. Results indicate that increased model complexity is associated with decreased classification accuracy and AUC-ROC scores (ResNet-18: 98.4%, 0.997; ResNet-101: 95.9%, 0.988). Notably, in eleven of the twelve statistical tests performed, no statistically significant differences were found between the XAI quantitative metrics - Relevance Rank Accuracy and the proposed Positive Attribution Ratio - across the trained models. These results suggest that increased model complexity does not consistently lead to higher classification performance or more relevant explanations of the models' decision-making processes.
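The two quantitative metrics can be sketched as follows. Relevance Rank Accuracy is the fraction of the top-K most relevant pixels (K = size of the ground-truth mask) that fall inside the mask. The Positive Attribution Ratio is proposed in this work, so the implementation below is only a hypothetical reading of its name: the share of total positive attribution mass that lies inside the mask. Function names and signatures are illustrative, not taken from the paper's code.

```python
import numpy as np

def relevance_rank_accuracy(attribution: np.ndarray, mask: np.ndarray) -> float:
    """Fraction of the K highest-attribution pixels inside the ground-truth
    mask, where K equals the number of mask pixels."""
    k = int(mask.sum())
    if k == 0:
        return 0.0
    flat = attribution.ravel()
    top_k = np.argpartition(flat, -k)[-k:]  # indices of the K largest values
    return float(mask.ravel()[top_k].sum()) / k

def positive_attribution_ratio(attribution: np.ndarray, mask: np.ndarray) -> float:
    """Hypothetical definition (assumption, not from the paper): proportion of
    all positive attribution that falls within the ground-truth mask."""
    pos = np.clip(attribution, 0, None)  # keep only positive attributions
    total = pos.sum()
    return float((pos * mask).sum() / total) if total > 0 else 0.0
```

Both metrics lie in [0, 1], with higher values indicating better agreement between the explanation and the annotated disease region.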