Deep neural networks (DNNs) have demonstrated remarkable empirical performance in large-scale supervised learning, particularly when both the sample size $n$ and the covariate dimension $p$ are large. This study investigates the use of DNNs in a broad class of complex causal inference tasks in which direct estimation is infeasible and multi-stage learning is required; examples include estimating the conditional average treatment effect and the dynamic treatment effect. In this framework, DNNs are constructed sequentially, with each stage building on the preceding ones. To mitigate the impact of estimation errors from early stages on later ones, we integrate the DNNs in a doubly robust manner. In contrast to previous work, we provide theoretical guarantees for the effectiveness of DNNs in settings where the dimension $p$ grows with the sample size. These results are of independent interest and extend to degenerate single-stage learning problems.
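To make the doubly robust idea concrete, the sketch below illustrates the standard AIPW (augmented inverse probability weighting) pseudo-outcome for conditional average treatment effect estimation. This is a generic illustration, not the paper's estimator: the nuisance functions (`mu0`, `mu1`, the propensity `e`) are taken as known rather than fitted by DNNs, and a least-squares line stands in for the paper's second-stage DNN regression.

```python
import numpy as np

# Illustrative sketch of the doubly robust (AIPW) pseudo-outcome for
# CATE estimation; nuisance functions are known here for simplicity.
rng = np.random.default_rng(0)
n = 5000
x = rng.uniform(-1, 1, size=n)          # covariate
e = 1 / (1 + np.exp(-x))                # propensity score P(A=1 | X=x)
a = rng.binomial(1, e)                  # treatment indicator
mu0 = np.sin(x)                         # outcome model under control
mu1 = np.sin(x) + 1.0 + x               # under treatment: CATE(x) = 1 + x
y = np.where(a == 1, mu1, mu0) + rng.normal(0, 0.1, size=n)

# Doubly robust pseudo-outcome: its conditional mean equals the CATE
# if either the outcome models (mu0, mu1) or the propensity model (e)
# is correctly specified.
psi = (mu1 - mu0
       + a / e * (y - mu1)
       - (1 - a) / (1 - e) * (y - mu0))

# Second-stage regression of psi on x recovers the CATE; a simple
# least-squares line replaces the paper's DNN stage here.
coef = np.polyfit(x, psi, 1)
print(coef)   # slope and intercept both close to 1, i.e. CATE(x) = 1 + x
```

Because the correction terms vanish in expectation when the outcome models are correct, errors in the first-stage estimates enter the pseudo-outcome only through products of nuisance errors, which is what protects the later stages in the multi-stage construction.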