Conditional independence tests (CITs) are widely used for causal discovery and feature selection. Yet even with false discovery rate (FDR) control procedures, they often fail to provide frequentist guarantees in practice. We highlight two common failure modes: (i) in small samples, the asymptotic guarantees of many CITs are inaccurate, and even correctly specified models fail to estimate noise levels and control error; and (ii) when sample sizes are large but models are misspecified, unaccounted-for dependencies skew the test's behavior, so p-values are not uniform under the null. We propose Empirically Calibrated Conditional Independence Tests (ECCIT), a method that measures and corrects for miscalibration. For a chosen base CIT (e.g., GCM, HRT), ECCIT optimizes an adversary that selects features and response functions to maximize a miscalibration metric. ECCIT then fits a monotone calibration map that adjusts the base test's p-values in proportion to the observed miscalibration. Across benchmarks on synthetic and real data, ECCIT achieves valid FDR control with higher power than existing calibration strategies while remaining test-agnostic.
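To make the monotone-calibration idea concrete, the sketch below is a minimal, hypothetical illustration (not the paper's actual ECCIT procedure): p-values from an anti-conservative base test, simulated under the null, are passed through a monotone map given by the empirical null CDF, after which their distribution is approximately uniform. All names (`fit_monotone_calibration`, the squared-uniform null model) are illustrative assumptions.

```python
import numpy as np

# Hypothetical miscalibrated base test: squaring uniform draws yields
# anti-conservative null p-values (too many small values), standing in for
# the null p-values an adversarially probed base CIT might produce.
rng = np.random.default_rng(0)
calib_null = rng.uniform(size=20000) ** 2  # null p-values used to fit the map
test_null = rng.uniform(size=20000) ** 2   # fresh null p-values for evaluation

def fit_monotone_calibration(null_pvals):
    """Return a monotone map p -> empirical null CDF(p).

    Evaluated at null-distributed p-values, the map's output is
    approximately Uniform(0, 1), restoring type-I error control.
    """
    sorted_null = np.sort(null_pvals)
    n = len(sorted_null)
    def calibrate(p):
        # Right-continuous empirical CDF; monotone non-decreasing in p.
        return np.searchsorted(sorted_null, p, side="right") / n
    return calibrate

calibrate = fit_monotone_calibration(calib_null)

frac_raw = float(np.mean(test_null < 0.05))             # inflated type-I error
frac_cal = float(np.mean(calibrate(test_null) < 0.05))  # close to nominal 0.05
```

Because the map is an empirical CDF, it is monotone by construction, so the ranking of hypotheses by p-value (and hence any FDR procedure's rejection order) is preserved; only the scale of the p-values is corrected.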