To effectively study complex causal systems, it is often useful to construct representations that simplify parts of the system by discarding irrelevant details while preserving key features. The Information Bottleneck (IB) method is a widely used approach in representation learning that compresses random variables while retaining information about a target variable. Methods like the IB are purely statistical and ignore underlying causal structures, making them ill-suited for causal tasks. We propose the Causal Information Bottleneck (CIB), a causal extension of the IB that compresses a set of chosen variables while maintaining causal control over a target variable. The resulting representations are causally interpretable and support reasoning about interventions. We present experimental results demonstrating that the learned representations accurately capture causality as intended.
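For context, the standard (non-causal) IB that the abstract builds on seeks a compressed representation \(T\) of a variable \(X\) that remains informative about a target \(Y\); a common formulation of its objective is:

```latex
% Standard Information Bottleneck Lagrangian (Tishby et al.):
% minimize over the encoder q(t \mid x) the trade-off between
% compression I(X;T) and relevance I(T;Y), weighted by \beta > 0.
\min_{q(t \mid x)} \; I(X; T) - \beta \, I(T; Y)
```

The CIB proposed here modifies this statistical trade-off to retain causal (interventional) control over the target rather than mere mutual information; the precise CIB objective is given in the body of the paper.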