In the context of medical decision making, counterfactual prediction enables clinicians to predict treatment outcomes of interest under alternative courses of therapeutic action, given observed patient history. Prior machine learning approaches to counterfactual prediction under time-varying treatments focus on static time-varying treatment regimes, in which treatments do not depend on previous covariate history. In this work, we present G-Transformer, a Transformer-based framework supporting g-computation for counterfactual prediction under dynamic and time-varying treatment strategies. G-Transformer captures complex, long-range dependencies in time-varying covariates using a Transformer architecture. It estimates the conditional distribution of relevant covariates given covariate and treatment history at each time point using an encoder architecture, then produces Monte Carlo estimates of counterfactual outcomes by simulating patient trajectories forward under treatment strategies of interest. We evaluate G-Transformer extensively on two simulated longitudinal datasets from mechanistic models and a real-world sepsis ICU dataset from MIMIC-IV. G-Transformer outperforms both classical and state-of-the-art counterfactual prediction models in these settings. To the best of our knowledge, this is the first Transformer-based architecture for counterfactual outcome prediction under dynamic and time-varying treatment strategies.
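The g-computation rollout described above can be sketched in miniature. This is a hedged illustration only: the conditional model here is a hypothetical linear-Gaussian stand-in for the paper's Transformer encoder, and the threshold policy and outcome definition are invented for the example; the structure (sample next covariates given history, choose treatment from simulated covariates, average over Monte Carlo trajectories) is what the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next_covariate(x_prev, a_prev):
    # Stand-in for the learned conditional distribution p(x_t | history).
    # Toy linear-Gaussian dynamics (hypothetical): treatment lowers the covariate.
    return 0.9 * x_prev - 0.5 * a_prev + rng.normal(0.0, 0.1)

def dynamic_policy(x):
    # A *dynamic* treatment strategy: the action depends on the (simulated)
    # covariate, unlike static regimes fixed in advance.
    return 1.0 if x > 1.0 else 0.0

def g_computation(x0, horizon=10, n_sim=1000):
    # Monte Carlo estimate of the counterfactual outcome under the policy:
    # simulate trajectories forward, re-applying the policy to each sampled
    # covariate, then average the outcomes.
    outcomes = []
    for _ in range(n_sim):
        x = x0
        a = dynamic_policy(x0)
        for _ in range(horizon):
            x = sample_next_covariate(x, a)
            a = dynamic_policy(x)  # treatment reacts to simulated history
        outcomes.append(x)  # outcome = final covariate value in this toy
    return float(np.mean(outcomes))

est = g_computation(x0=2.0)
```

In the actual framework, `sample_next_covariate` would be replaced by sampling from the encoder's estimated conditional distribution given the full covariate and treatment history, rather than a one-step Markov toy model.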