Causal discovery, the task of inferring causal structure from data, has the potential to uncover mechanistic insights from biological experiments, especially those involving perturbations. However, causal discovery algorithms over larger sets of variables tend to be brittle to model misspecification or limited data. For example, single-cell transcriptomics measures thousands of genes, but the nature of their relationships is not known, and there may be as few as tens of cells per intervention setting. To mitigate these challenges, we propose a foundation model-inspired approach: a supervised model trained on large-scale, synthetic data to predict causal graphs from summary statistics -- such as the outputs of classical causal discovery algorithms run over subsets of variables, along with other statistical hints like the inverse covariance. Our approach is enabled by the observation that the typical errors in a discovery algorithm's outputs remain comparable across datasets. Theoretically, we show that the model architecture is well-specified, in the sense that it can recover a causal graph consistent with the graphs over subsets. Empirically, we train the model to be robust to misspecification and distribution shift using diverse datasets. Experiments on biological and synthetic data confirm that this model generalizes well beyond its training set, runs on graphs with hundreds of variables in seconds, and can be easily adapted to different underlying data assumptions.
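The kind of subset-level summary statistics the abstract mentions can be illustrated with a minimal sketch. The function below is hypothetical (its name, parameters, and the use of partial correlations derived from the inverse covariance are our illustrative assumptions, not the paper's actual feature pipeline): it draws random variable subsets, computes the inverse covariance on each subset, and aggregates the resulting partial-correlation magnitudes into per-pair features that a supervised graph-prediction model could consume.

```python
import numpy as np

def subset_precision_features(X, n_subsets=20, subset_size=5, seed=0):
    """Hypothetical sketch: aggregate inverse-covariance 'hints' computed
    on random variable subsets into per-pair summary statistics.

    X : (n_samples, n_vars) data matrix.
    Returns an (n_vars, n_vars) matrix of averaged absolute partial
    correlations; zero where a pair never co-occurred in a subset.
    """
    rng = np.random.default_rng(seed)
    _, d = X.shape
    score = np.zeros((d, d))
    counts = np.zeros((d, d))
    for _ in range(n_subsets):
        idx = rng.choice(d, size=subset_size, replace=False)
        cov = np.cov(X[:, idx], rowvar=False)
        prec = np.linalg.pinv(cov)          # inverse covariance on the subset
        dsqrt = np.sqrt(np.abs(np.diag(prec)))
        pcorr = -prec / np.outer(dsqrt, dsqrt)  # normalized partial correlations
        for a, i in enumerate(idx):
            for b, j in enumerate(idx):
                if i != j:
                    score[i, j] += abs(pcorr[a, b])
                    counts[i, j] += 1
    # Average over the subsets in which each pair appeared together.
    return np.where(counts > 0, score / np.maximum(counts, 1), 0.0)
```

In this toy setting, directly linked variables accumulate consistently large partial correlations across subsets, while unrelated pairs average near zero; the supervised model described in the abstract would learn to map such (noisy, subset-level) statistics to a full causal graph.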