We propose a novel formalism that describes Structural Causal Models (SCMs) as fixed-point problems on causally ordered variables, eliminating the need for Directed Acyclic Graphs (DAGs), and we establish the weakest known conditions for their unique recovery given the topological ordering (TO). Building on this formalism, we design a two-stage causal generative model that first infers a valid TO from observations in a zero-shot manner, and then learns the generative SCM on the ordered variables. To infer TOs, we amortize their learning over synthetically generated datasets by sequentially predicting the leaves of the graphs seen during training. To learn SCMs, we design a transformer-based architecture with a new attention mechanism that enables the modeling of causal structures, and we show that this parameterization is consistent with our formalism. Finally, we conduct an extensive evaluation of each component individually, and show that, when combined, our model outperforms various baselines on generated out-of-distribution problems. The code is available on \href{https://github.com/microsoft/causica/tree/main/research_experiments/fip}{GitHub}.
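As a minimal illustration of the fixed-point view above (the notation $x$, $n$, $f$, $d$ is ours, introduced only for this sketch, and the paper's exact conditions may be more general): let $x = (x_1, \dots, x_d)$ denote the observed variables sorted by a valid TO and $n = (n_1, \dots, n_d)$ the exogenous noise. An SCM can then be written as the fixed-point equation
\[
x = f(x, n), \qquad \frac{\partial f_i}{\partial x_j} = 0 \quad \text{for all } j \geq i,
\]
so each $x_i$ depends only on strictly preceding variables and its own noise. Since the Jacobian of $f$ with respect to $x$ is strictly lower triangular (hence nilpotent), iterating $x^{(t+1)} = f(x^{(t)}, n)$ from any initialization reaches the unique fixed point in at most $d$ steps once the TO is known.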
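One way to picture an attention mechanism constrained by causal structure is a strictly lower-triangular mask over TO-sorted variables. The PyTorch sketch below is an assumption-laden illustration of that idea, not the paper's exact parameterization; the helper names \texttt{causal\_structure\_mask} and \texttt{masked\_attention} are hypothetical.
\begin{verbatim}
import torch

def causal_structure_mask(d: int) -> torch.Tensor:
    """Boolean (d, d) mask that is True where attention is blocked.

    Blocking the diagonal and upper triangle means each TO-sorted
    variable may only attend to variables that strictly precede it,
    mirroring the strictly lower-triangular fixed-point structure.
    """
    return torch.ones(d, d).triu().bool()

def masked_attention(q: torch.Tensor, k: torch.Tensor,
                     v: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Scaled dot-product attention restricted by a structure mask."""
    scores = (q @ k.transpose(-2, -1)) / q.shape[-1] ** 0.5
    scores = scores.masked_fill(mask, float("-inf"))
    weights = torch.softmax(scores, dim=-1)
    # A root variable has no admissible keys (a fully masked row), so
    # its softmax is NaN; zero it out. In a full model, roots would be
    # driven by their noise embedding alone.
    weights = torch.nan_to_num(weights)
    return weights @ v

# Toy usage: 5 variables in topological order, 16-dim embeddings.
d, h = 5, 16
x_emb = torch.randn(d, h)
out = masked_attention(x_emb, x_emb, x_emb, causal_structure_mask(d))
\end{verbatim}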