Causal structures play a central role in world models that flexibly adapt to changes in the environment. While recent works motivate the benefits of discovering local causal graphs for dynamics modelling, in this work we demonstrate that accurately capturing these relationships in complex settings remains challenging for the current state of the art. To remedy this shortcoming, we postulate that sparsity is a critical ingredient for the discovery of such local causal structures. To this end, we present the SPARse TrANsformer World model (SPARTAN), a Transformer-based world model that learns local causal structures between entities in a scene. By applying sparsity regularisation to the attention pattern between object-factored tokens, SPARTAN identifies sparse local causal models that accurately predict future object states. Furthermore, we extend our model to capture sparse interventions with unknown targets on the dynamics of the environment. The result is a highly interpretable world model that can efficiently adapt to changes. Empirically, we evaluate SPARTAN against the current state of the art in object-centric world models on observation-based environments and demonstrate that our model learns accurate local causal graphs, achieves significantly improved few-shot adaptation to changes in the dynamics of the environment, and is robust to the removal of irrelevant distractors.
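To make the core mechanism concrete, the following is a minimal NumPy sketch of attention between object-factored tokens with a sparsity penalty on the attention pattern. This is an illustrative assumption, not the authors' implementation: the function name, the `reg_weight` parameter, and the choice of an entropy-based penalty (low entropy means each token attends to few causal parents) are all hypothetical; the paper's exact regulariser may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_sparsity_penalty(queries, keys, reg_weight=0.1):
    """Scaled dot-product attention between object tokens, plus an
    entropy-based sparsity penalty on each token's attention row.
    (Hypothetical sketch; SPARTAN's actual regulariser may differ.)"""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)       # (n_objects, n_objects)
    attn = softmax(scores, axis=-1)              # row i: object i's candidate causal parents
    # Low row entropy => a sparse, near one-hot attention row,
    # i.e. a sparse local causal graph.
    entropy = -(attn * np.log(attn + 1e-12)).sum(axis=-1)
    penalty = reg_weight * entropy.mean()
    return attn, penalty

# Usage: 4 object tokens with 8-dimensional features.
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
k = rng.normal(size=(4, 8))
attn, penalty = attention_with_sparsity_penalty(q, k)
```

In training, `penalty` would be added to the prediction loss so that each object token learns to attend only to the few entities that causally influence its next state.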