This paper introduces the concept of Language-Guided World Models (LWMs) -- probabilistic models that can simulate environments by reading text. Agents equipped with these models give humans more extensive and efficient control, allowing them to simultaneously alter agent behaviors across multiple tasks through natural verbal communication. In this work, we take initial steps toward developing robust LWMs that can generalize to compositionally novel language descriptions. We design a challenging world-modeling benchmark based on the game of MESSENGER (Hanjie et al., 2021), featuring evaluation settings that require varying degrees of compositional generalization. Our experiments reveal the lack of generalizability of the state-of-the-art Transformer model, which offers only marginal improvements in simulation quality over a no-text baseline. We devise a more robust model by fusing the Transformer with the EMMA attention mechanism (Hanjie et al., 2021). Our model substantially outperforms the Transformer and approaches the performance of a model with oracle semantic parsing and grounding capabilities. To demonstrate the practicality of this model in improving AI safety and transparency, we simulate a scenario in which the model enables an agent to present plans to a human before execution and to revise those plans based on the human's language feedback.