Model interpretability is crucial for establishing AI safety and clinician trust in medical applications, for example in survival modelling with competing risks. Recent deep learning models have attained strong predictive performance, but their limited transparency as black-box models hinders their integration into clinical practice. To address this gap, we propose an intrinsically interpretable survival model, CRISPNAM-FG. Leveraging the structure of Neural Additive Models (NAMs) with a separate projection vector for each risk, our approach predicts the Cumulative Incidence Function using the Fine-Gray formulation, achieving high predictive power with intrinsically transparent and auditable predictions. We validated the model on several benchmark datasets and applied it to predict future foot complications in diabetic patients across 29 Ontario hospitals (2016-2023). Our method achieves performance competitive with other deep survival models while providing transparency through shape functions and feature-importance plots.
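The additive, per-risk structure described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the shape-function architecture, hidden sizes, and weights are hypothetical (untrained random values), and the Fine-Gray fitting procedure is omitted. It shows only the core interpretability property: each feature passes through its own shape function, a separate projection vector per competing risk maps the feature embeddings to risk scores, and the total score decomposes exactly into per-feature contributions that can be plotted and audited.

```python
import numpy as np

rng = np.random.default_rng(0)

class ShapeFunction:
    """One small MLP per feature, as in Neural Additive Models (NAMs).
    Weights are random placeholders; a real model would train them."""
    def __init__(self, hidden=16, embed=8):
        self.w1 = rng.normal(size=(1, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(size=(hidden, embed))

    def __call__(self, x):
        # x: (n,) one feature column -> (n, embed) feature embedding
        h = np.maximum(x[:, None] @ self.w1 + self.b1, 0.0)  # ReLU
        return h @ self.w2

n_features, n_risks, embed = 4, 2, 8
shapes = [ShapeFunction(embed=embed) for _ in range(n_features)]
# one separate projection vector per competing risk (hypothetical weights)
proj = rng.normal(size=(n_risks, embed))

def risk_scores(X):
    """Additive contributions of each feature to each risk's score.

    Returns contribs of shape (n_features, n_risks, n) and the total
    score per risk, which is just the sum over features -- this exact
    decomposition is what makes the predictions auditable.
    """
    contribs = np.stack(
        [proj @ shapes[j](X[:, j]).T for j in range(n_features)]
    )
    return contribs, contribs.sum(axis=0)

X = rng.normal(size=(5, n_features))
contribs, total = risk_scores(X)
# additivity: the total risk score equals the sum of per-feature parts
assert np.allclose(total, contribs.sum(axis=0))
```

In the full model, these per-risk scores would enter the Fine-Gray subdistribution-hazard formulation to yield a Cumulative Incidence Function per risk; here the sketch stops at the additive scoring stage, since that is where the transparency claim lives.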