Given the sheer volume of surgical procedures and the substantial rate of postoperative mortality, assessing and managing surgical complications has become a critical public health concern. Existing artificial intelligence (AI) tools for risk surveillance and diagnosis often lack adequate interpretability, fairness, and reproducibility. To address this, we propose an Explainable AI (XAI) framework designed to answer five critical questions: why, why not, how, what if, and what else, with the goal of enhancing the explainability and transparency of AI models. To address these questions, we incorporate techniques including Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), counterfactual explanations, model cards, an interactive feature manipulation interface, and the identification of similar patients. We showcase an XAI interface prototype that adheres to this framework for predicting major postoperative complications. This initial implementation has provided valuable insights into the explanatory potential of our XAI framework and represents a first step towards its clinical adoption.