Causal discovery from observational data is a fundamental task in artificial intelligence, with far-reaching implications for decision-making, prediction, and intervention. Existing methods, despite significant advances, can be broadly categorized as constraint-based or score-based approaches. Constraint-based methods offer rigorous causal discovery but are often hindered by small sample sizes, while score-based methods provide flexible optimization but typically forgo explicit conditional independence testing. This work explores a third avenue: developing differentiable $d$-separation scores, obtained through percolation theory using soft logic. This enables a new type of causal discovery method: gradient-based optimization of conditional independence constraints. Empirical evaluations demonstrate the robust performance of our approach in low-sample regimes, surpassing traditional constraint-based and score-based baselines on a real-world dataset. Code and data for the proposed method are publicly available at https://github.com/PurdueMINDS/DAGPA.