We examine the capability of Multimodal Large Language Models (MLLMs) to tackle diverse domains that extend beyond the traditional language and vision tasks these models are typically trained on. Specifically, we focus on areas such as Embodied AI, Games, UI Control, and Planning. To this end, we introduce a process for adapting an MLLM into a Generalist Embodied Agent (GEA). GEA is a single unified model capable of grounding itself across these varied domains through a multi-embodiment action tokenizer. GEA is trained with supervised learning on a large dataset of embodied experiences and with online RL in interactive simulators. We explore the data and algorithmic choices necessary to develop such a model. Our findings reveal the importance of training with cross-domain data and online RL for building generalist agents. The final GEA model achieves strong generalization to unseen tasks across diverse benchmarks relative to other generalist models and benchmark-specific approaches.
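The abstract does not spell out how the multi-embodiment action tokenizer works. Below is a minimal sketch of one common recipe for such tokenizers: continuous actions are uniformly binned into token IDs reserved at the end of the language model's vocabulary, while discrete actions (e.g., UI clicks or game buttons) are mapped one-to-one into the same reserved range. All names and values here (ActionTokenizer, num_bins, vocab_offset) are illustrative assumptions, not the paper's actual API.

```python
# Illustrative sketch of a multi-embodiment action tokenizer; all names and
# parameter values are assumptions, not taken from the GEA paper.
import numpy as np

class ActionTokenizer:
    def __init__(self, num_bins: int = 256, low: float = -1.0,
                 high: float = 1.0, vocab_offset: int = 32000):
        # num_bins token IDs are reserved past the end of the MLLM's text
        # vocabulary; vocab_offset marks where that reserved range begins.
        self.num_bins = num_bins
        self.low, self.high = low, high
        self.vocab_offset = vocab_offset

    def encode_continuous(self, action: np.ndarray) -> list[int]:
        # Clip each action dimension to [low, high], then bin it uniformly
        # and shift into the reserved token-ID range.
        clipped = np.clip(action, self.low, self.high)
        bins = ((clipped - self.low) / (self.high - self.low)
                * (self.num_bins - 1)).round().astype(int)
        return (bins + self.vocab_offset).tolist()

    def decode_continuous(self, token_ids: list[int]) -> np.ndarray:
        # Invert the binning, recovering one value per action dimension.
        bins = np.asarray(token_ids) - self.vocab_offset
        return self.low + bins / (self.num_bins - 1) * (self.high - self.low)

    def encode_discrete(self, action_index: int) -> list[int]:
        # Discrete embodiments reuse the same reserved range, one token
        # per action, so all embodiments share a single output head.
        return [action_index + self.vocab_offset]
```

Under this scheme, a 7-DoF robot arm command and a single game-pad button press both become short sequences over the same reserved vocabulary, which is what lets one unified model emit actions for multiple embodiments.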