Causal structures play a central role in world models that flexibly adapt to changes in the environment. While recent works motivate the benefits of discovering local causal graphs for dynamics modelling, in this work we demonstrate that accurately capturing these relationships in complex settings remains challenging for the current state-of-the-art. To remedy this shortcoming, we postulate that sparsity is a critical ingredient for the discovery of such local causal structures. To this end, we present the SPARse TrANsformer World model (SPARTAN), a Transformer-based world model that learns local causal structures between entities in a scene. By applying sparsity regularisation to the attention pattern between object-factored tokens, SPARTAN identifies sparse local causal models that accurately predict future object states. Furthermore, we extend our model to capture sparse interventions with unknown targets on the dynamics of the environment. This yields a highly interpretable world model that can efficiently adapt to changes. Empirically, we evaluate SPARTAN against state-of-the-art object-centric world models on observation-based environments and demonstrate that our model learns accurate local causal graphs, achieves significantly improved few-shot adaptation to changes in the dynamics of the environment, and is more robust to the removal of irrelevant distractors.
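The core mechanism described above, sparsity regularisation on the attention pattern between object-factored tokens, can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not the authors' code: it assumes a single attention layer over object tokens and uses a simple L1 penalty on the attention weights as one way to encourage sparse local causal graphs; the function names and the choice of penalty are illustrative assumptions.

```python
# Hypothetical sketch (not SPARTAN's actual implementation): single-head
# attention over object-factored tokens with an L1 sparsity penalty on the
# attention pattern, encouraging each object to attend to few causal parents.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sparse_attention_loss(q, k, v, target, lam=0.1):
    """Prediction loss plus L1 penalty on the attention weights.

    q, k, v: (num_objects, dim) query/key/value projections of object tokens
    target:  (num_objects, dim) next-step object states to predict
    lam:     weight of the sparsity penalty (assumed hyperparameter)
    """
    scores = q @ k.T / np.sqrt(q.shape[-1])
    attn = softmax(scores, axis=-1)       # (num_objects, num_objects) pattern
    pred = attn @ v                       # predicted next object states
    pred_loss = ((pred - target) ** 2).mean()
    sparsity = np.abs(attn).mean()        # L1 on the attention pattern
    return pred_loss + lam * sparsity, attn

# toy usage: 3 object tokens with 4-dimensional states
rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(3, 4)) for _ in range(3))
tgt = rng.normal(size=(3, 4))
loss, attn = sparse_attention_loss(q, k, v, tgt)
```

Minimising such a loss drives many attention weights toward zero, so the surviving entries of `attn` can be read off as a sparse local causal graph between objects, which is the interpretability property the abstract highlights.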