Many public policies and medical interventions involve dynamic treatment assignments, where treatments are sequentially assigned to the same individuals across multiple stages, and the effect of treatment at each stage is typically heterogeneous with respect to the history of prior treatments and associated characteristics. We study statistical learning of optimal dynamic treatment regimes (DTRs), which guide the optimal treatment assignment for each individual at each stage based on the individual's history. We propose a stepwise doubly robust approach that learns the optimal DTR from observational data under the assumption of sequential ignorability. The approach solves the sequential treatment assignment problem by backward induction: at each step, we combine estimators of propensity scores and action-value functions (Q-functions) to construct augmented inverse probability weighting (AIPW) estimators of policy values for that stage. The approach consistently estimates the optimal DTR if, at each stage, either the propensity score or the Q-function is consistently estimated. Furthermore, the resulting DTR achieves the optimal $n^{-1/2}$ rate of convergence of regret under mild conditions on the convergence rates of the nuisance-parameter estimators.
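The backward-induction scheme described above can be sketched in a minimal two-stage example. This is an illustrative simulation, not the paper's implementation: the data-generating process, the linear Q-function models, and the known randomization probability of 0.5 (so the propensity score is correctly specified by construction) are all assumptions made for the sketch. At stage 2 we fit a Q-function, take the greedy rule, and form the AIPW pseudo-outcome; at stage 1 we repeat with that pseudo-outcome as the response.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# --- Synthetic two-stage data (illustrative assumptions) ---
X1 = rng.normal(size=n)                         # stage-1 covariate
A1 = rng.integers(0, 2, size=n)                 # stage-1 treatment, P(A1=1)=0.5
X2 = 0.5 * X1 + rng.normal(size=n)              # stage-2 covariate
A2 = rng.integers(0, 2, size=n)                 # stage-2 treatment, P(A2=1)=0.5
# Outcome: treatment effects are heterogeneous in the history
Y = X1 + A1 * X1 + A2 * X2 + rng.normal(size=n)

def fit_ols(features, y):
    """Least-squares fit; returns coefficients (intercept first)."""
    Z = np.column_stack([np.ones(len(y))] + list(features))
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta

def predict(beta, features):
    Z = np.column_stack([np.ones(len(features[0]))] + list(features))
    return Z @ beta

# --- Stage 2: fit Q2, take greedy rule d2, build the AIPW pseudo-outcome ---
beta2 = fit_ols([X1, A1, A1 * X1, X2, A2, A2 * X2], Y)
ones, zeros = np.ones(n), np.zeros(n)
q2_1 = predict(beta2, [X1, A1, A1 * X1, X2, ones, X2])      # Q2 under A2=1
q2_0 = predict(beta2, [X1, A1, A1 * X1, X2, zeros, zeros])  # Q2 under A2=0
d2 = (q2_1 > q2_0).astype(int)
e2 = np.full(n, 0.5)                 # known randomization probability
q2_d = np.where(d2 == 1, q2_1, q2_0)
p2_d = np.where(d2 == 1, e2, 1 - e2)
q2_obs = np.where(A2 == 1, q2_1, q2_0)
# AIPW pseudo-outcome: Q-model value under d2 plus an IPW-corrected residual
V2 = q2_d + (A2 == d2) / p2_d * (Y - q2_obs)

# --- Stage 1: same construction with V2 as the response ---
beta1 = fit_ols([X1, A1, A1 * X1], V2)
q1_1 = predict(beta1, [X1, ones, X1])
q1_0 = predict(beta1, [X1, zeros, zeros])
d1 = (q1_1 > q1_0).astype(int)

# Under this data-generating process the optimal rules are
# d2(X2) = 1{X2 > 0} and d1(X1) = 1{X1 > 0}.
print(np.mean(d2 == (X2 > 0)), np.mean(d1 == (X1 > 0)))
```

In practice the nuisance estimators would be more flexible (and the propensity scores estimated rather than known); the double robustness claimed in the abstract means the greedy rules remain consistent as long as, at each stage, either the propensity score or the Q-function is modeled correctly.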