Causal Abstraction (CA) theory provides a principled framework for relating causal models that describe the same system at different levels of granularity while ensuring interventional consistency between them. Recently, several approaches for learning CAs have been proposed, but all assume fixed and well-specified exogenous distributions, making them vulnerable to environmental shifts and misspecification. In this work, we address these limitations by introducing the first class of distributionally robust CAs and their associated learning algorithms. These algorithms cast robust CA learning as a constrained min-max optimization problem over Wasserstein ambiguity sets. We provide theoretical results, for both empirical and Gaussian environments, that enable principled selection of the level of robustness via the radius of these sets. Furthermore, we present empirical evidence across different problems and CA learning methods, demonstrating our framework's robustness not only to environmental shifts but also to misspecification of the structural model and the intervention mapping.
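As a hedged illustration of the distributionally robust objective mentioned above (not the paper's actual algorithm): for a Wasserstein-1 ambiguity ball of radius ε around the empirical distribution, and an L-Lipschitz loss, Kantorovich-Rubinstein duality bounds the worst-case expected loss by the empirical loss plus ε·L. The function name `robust_objective` and the toy data are illustrative assumptions.

```python
import numpy as np

def robust_objective(samples, loss, lipschitz, radius):
    """Upper bound on sup_{W1(Q, P_hat) <= radius} E_Q[loss],
    where P_hat is the empirical distribution of `samples`.
    Valid for any `lipschitz`-Lipschitz loss (Kantorovich-Rubinstein)."""
    empirical = np.mean([loss(x) for x in samples])
    return empirical + radius * lipschitz

# Toy example: absolute-value loss (1-Lipschitz) on four samples.
samples = np.array([0.0, 1.0, -1.0, 2.0])
val = robust_objective(samples, abs, lipschitz=1.0, radius=0.5)
# Empirical mean of |x| is 1.0, so the robust bound is 1.0 + 0.5 = 1.5.
```

The radius of the ambiguity set directly controls the robustness level, which is the quantity the paper's theoretical results aim to select in a principled way.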