Internal modelling of the world -- predicting the transition from a previous state $X$ to a next state $Y$ under an action $Z$ -- is essential to reasoning and planning in LLMs and VLMs. Learning such models typically requires costly action-labelled trajectories. We propose SWIRL, a self-improvement framework that learns from state-only sequences by treating actions as latent variables and alternating between a Forward World Model (FWM) $P_\theta(Y \mid X, Z)$ and an Inverse Dynamics Model (IDM) $Q_\phi(Z \mid X, Y)$. SWIRL iterates two phases: (1) Variational Information Maximisation, which updates the FWM to generate next states that maximise the conditional mutual information with the latent actions given the prior states, encouraging identifiable consistency; and (2) ELBO Maximisation, which updates the IDM to explain the observed transitions; together the two phases perform coordinate ascent. Both models are trained with reinforcement learning (specifically, GRPO), using the frozen counterpart model's log-probability as the reward signal. We provide theoretical learnability guarantees for both updates and evaluate SWIRL on LLMs and VLMs across multiple environments: single-turn and multi-turn open-world visual dynamics, and synthetic textual environments for physics, web, and tool calling. SWIRL achieves gains of 16% on AURORABench, 28% on ByteMorph, 16% on WorldPredictionBench, and 14% on StableToolBench.
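The alternating reward structure can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's implementation: binary states and actions with true dynamics $y = x \oplus z$, and the FWM/IDM reduced to categorical probability tables standing in for the LLM policies. It shows only how each phase scores its own samples with the frozen counterpart's log-probability; the GRPO policy updates themselves are omitted.

```python
import math
import random

# Hypothetical toy parameterisation (not the paper's models):
# theta[(x, z)] = P_theta(y=1 | x, z);  phi[(x, y)] = Q_phi(z=1 | x, y).
# True dynamics is y = x XOR z, so a consistent pair has both tables
# concentrated on the XOR relation.
theta = {(x, z): 0.99 if (x ^ z) == 1 else 0.01 for x in (0, 1) for z in (0, 1)}
phi = {(x, y): 0.99 if (x ^ y) == 1 else 0.01 for x in (0, 1) for y in (0, 1)}

def fwm_logprob(theta, x, z, y):
    """log P_theta(y | x, z) for a Bernoulli table."""
    p1 = theta[(x, z)]
    return math.log(p1 if y == 1 else 1.0 - p1)

def idm_logprob(phi, x, y, z):
    """log Q_phi(z | x, y) for a Bernoulli table."""
    q1 = phi[(x, y)]
    return math.log(q1 if z == 1 else 1.0 - q1)

def fwm_reward(theta, phi_frozen, x):
    """Phase 1 (FWM update): draw a latent action z, roll the FWM forward to
    get y, and reward the sample with the frozen IDM's log-probability of
    recovering z from (x, y) -- the mutual-information-style signal."""
    z = random.randint(0, 1)                           # uniform action prior
    y = 1 if random.random() < theta[(x, z)] else 0    # sample y ~ P_theta
    return idm_logprob(phi_frozen, x, y, z)

def idm_reward(phi, theta_frozen, x, y):
    """Phase 2 (IDM update): for an observed state pair (x, y), sample a
    latent action z from the IDM and reward it with the frozen FWM's
    log-probability of the observed transition -- the ELBO reconstruction
    term."""
    z = 1 if random.random() < phi[(x, y)] else 0      # sample z ~ Q_phi
    return fwm_logprob(theta_frozen, x, z, y)
```

With the consistent tables above, both rewards average close to $\log 0.99 \approx 0$; a mismatched FWM/IDM pair would push them toward large negative values, which is the gradient signal each GRPO phase would exploit.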