Uncertainty quantification is central to many applications of causal machine learning, yet principled Bayesian inference for causal effects remains challenging. Standard Bayesian approaches typically require specifying a probabilistic model for the data-generating process, including high-dimensional nuisance components such as propensity scores and outcome regressions. Standard posteriors are thus sensitive to these strong modeling choices and demand complex prior elicitation. In this paper, we propose a generalized Bayesian framework for causal inference. Our framework avoids explicit likelihood modeling; instead, we place priors directly on the causal estimands and update them using an identification-driven loss function, which yields generalized posteriors for causal effects. As a result, our framework equips existing loss-based causal estimators with full uncertainty quantification. Our framework is flexible and applicable to a broad range of causal estimands (e.g., ATE, CATE). Further, it can be applied on top of state-of-the-art causal machine learning pipelines (e.g., Neyman-orthogonal meta-learners). For Neyman-orthogonal losses, we show that the generalized posteriors converge to their oracle counterparts and remain robust to first-stage nuisance estimation error. With calibration, we thus obtain valid frequentist uncertainty even when nuisance estimators converge at slower-than-parametric rates. Empirically, we demonstrate that our proposed framework offers causal effect estimation with calibrated uncertainty across several causal inference settings. To the best of our knowledge, this is the first flexible framework for constructing generalized Bayesian posteriors for causal machine learning.
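To make the idea concrete, the following is a minimal sketch (not the paper's implementation) of a generalized (Gibbs) posterior for the ATE: a prior is placed directly on the estimand and updated with a squared-error loss on Neyman-orthogonal (AIPW-style) pseudo-outcomes, so no likelihood for the full data-generating process is specified. The simulated data, the plug-in nuisance estimates, and the learning-rate choice are all illustrative assumptions; in practice the nuisances would come from ML models and the learning rate from a calibration step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative simulated data with a known ATE of 2.0.
n = 2000
x = rng.normal(size=n)
e = 1.0 / (1.0 + np.exp(-x))             # true propensity score
a = rng.binomial(1, e)
y = 2.0 * a + x + rng.normal(size=n)     # outcome; true ATE = 2.0

# First-stage nuisance estimates (here: oracle plug-ins for simplicity;
# in practice these would come from any ML learner).
e_hat = 1.0 / (1.0 + np.exp(-x))
mu1_hat = 2.0 + x
mu0_hat = x

# Neyman-orthogonal (AIPW) pseudo-outcomes for the ATE.
phi = (mu1_hat - mu0_hat
       + a * (y - mu1_hat) / e_hat
       - (1 - a) * (y - mu0_hat) / (1 - e_hat))

# Generalized posterior: pi(tau | data) ∝ pi0(tau) * exp(-lam * sum((phi_i - tau)^2)).
# With a N(m0, s0^2) prior this update is conjugate Gaussian.
lam = 0.5 / phi.var()      # learning rate (assumed; would be set by calibration)
m0, s0 = 0.0, 10.0         # weakly informative prior on the ATE
prec = 1.0 / s0**2 + 2.0 * lam * n
post_mean = (m0 / s0**2 + 2.0 * lam * phi.sum()) / prec
post_sd = np.sqrt(1.0 / prec)

print(f"generalized posterior for ATE: {post_mean:.2f} +/- {post_sd:.2f}")
```

With this particular learning rate, the posterior mean recovers the usual AIPW point estimate and the posterior standard deviation matches its standard error; the framework's appeal is that the same loss-based update applies to other estimands (e.g., CATE) where no such closed form exists.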