Hazard and survival functions are natural, interpretable targets in time-to-event prediction, but their inherent non-additivity fundamentally limits standard additive explanation methods. We introduce Survival Functional Decomposition (SurvFD), a principled approach for analyzing feature interactions in machine learning survival models. By decomposing higher-order effects into time-dependent and time-independent components, SurvFD offers a previously unrecognized perspective on survival explanations, explicitly characterizing when and why additive explanations fail. Building on this theoretical decomposition, we propose SurvSHAP-IQ, which extends Shapley interactions to time-indexed functions, providing a practical estimator for higher-order, time-dependent interactions. Together, SurvFD and SurvSHAP-IQ establish an interaction- and time-aware interpretability approach for survival modeling, with broad applicability across time-to-event prediction tasks.