Recent advancements in large language models (LLMs) have expanded their capabilities beyond traditional text-based tasks to multimodal domains, integrating visual, auditory, and textual data. While multimodal LLMs have been extensively explored for high-level planning in domains like robotics and games, their potential as low-level controllers remains largely untapped. In this paper, we introduce a novel benchmark aimed at testing the emergent capabilities of multimodal LLMs as low-level policies in Atari games. Unlike traditional reinforcement learning (RL) methods, which require training for each new environment and a specified reward function, these LLMs leverage pre-existing multimodal knowledge to engage with game environments directly. Our study assesses the performance of multiple multimodal LLMs against traditional RL agents, human players, and random agents, focusing on their ability to understand complex visual scenes and formulate strategic responses. Our results show that these multimodal LLMs are not yet capable of serving as zero-shot low-level policies, and that this shortfall is due in part to limitations in their visual and spatial reasoning. Additional results and videos are available on our project webpage: https://dev1nw.github.io/atari-gpt/.
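To make the evaluation setting concrete, the sketch below shows one plausible way to use a multimodal LLM as a zero-shot low-level policy: at each timestep the current game frame is sent to the model, which replies with an action index. This is only an illustrative sketch, not the paper's exact protocol; it assumes the Gymnasium/ALE environment interface and the OpenAI Python SDK, and the model name, prompt wording, and action parsing are placeholders.

```python
# Hypothetical sketch of an LLM-as-low-level-policy loop on an Atari game.
# Assumes Gymnasium with ALE environments and the OpenAI Python SDK; the
# prompt and action mapping are illustrative, not the paper's exact setup.
import base64
import io

import gymnasium as gym
from openai import OpenAI
from PIL import Image

client = OpenAI()

def frame_to_data_url(frame) -> str:
    """Encode an RGB frame (numpy array) as a base64 PNG data URL."""
    buf = io.BytesIO()
    Image.fromarray(frame).save(buf, format="PNG")
    return "data:image/png;base64," + base64.b64encode(buf.getvalue()).decode()

def query_llm_for_action(frame, n_actions: int) -> int:
    """Ask the model to pick an action index for the current frame (zero-shot)."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"You are playing an Atari game. Reply with a single "
                         f"integer in [0, {n_actions - 1}] for the next action."},
                {"type": "image_url",
                 "image_url": {"url": frame_to_data_url(frame)}},
            ],
        }],
    )
    try:
        return int(response.choices[0].message.content.strip()) % n_actions
    except ValueError:
        return 0  # fall back to NOOP if the reply is not a valid integer

env = gym.make("ALE/Breakout-v5", render_mode="rgb_array")
obs, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = query_llm_for_action(obs, env.action_space.n)
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("Episode return:", total_reward)
```

Under this framing, no environment-specific training or reward shaping is performed; the model's pre-existing multimodal knowledge is the only source of the policy, which is precisely the capability the benchmark probes.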