Recent advances in Vision-Language-Action models (VLAs) have expanded the capabilities of embodied intelligence. However, real-time decision-making in complex 3D environments remains a significant challenge, demanding second-level responses, high-resolution perception, and tactical reasoning under dynamic conditions. To advance the field, we introduce CombatVLA, an efficient VLA model optimized for combat tasks in 3D action role-playing games (ARPGs). Specifically, CombatVLA is a 3B-parameter model trained on video-action pairs collected by an action tracker, with the data formatted as action-of-thought (AoT) sequences. CombatVLA then integrates seamlessly into an action execution framework, enabling efficient inference through our truncated AoT strategy. Experimental results demonstrate that CombatVLA not only outperforms all existing models on the combat understanding benchmark but also achieves a 50-fold acceleration in game combat. Moreover, it achieves a higher task success rate than human players. We will open-source all resources, including the action tracker, dataset, benchmark, model weights, training code, and the framework implementation, at https://combatvla.github.io/.
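For illustration only, the sketch below shows what a video-action pair formatted as an AoT sequence might look like, along with a truncated variant that keeps only the action tokens. Every field name, file path, and action token here is a hypothetical assumption for exposition, not the released CombatVLA data schema.

```python
# A minimal sketch of one video-action training record formatted as an
# action-of-thought (AoT) sequence. All field names, the clip path, and the
# action vocabulary are illustrative assumptions, not the released schema.
from dataclasses import dataclass
from typing import List

@dataclass
class AoTStep:
    thought: str  # hypothetical tactical reasoning preceding the action
    action: str   # hypothetical low-level action token (keyboard/mouse)

@dataclass
class VideoActionPair:
    video_clip: str            # path to the recorded gameplay segment (assumed)
    aot_sequence: List[AoTStep]

# Illustrative example: an action tracker pairs a short combat clip with the
# reasoning-then-action steps a model would be trained to emit.
sample = VideoActionPair(
    video_clip="clips/boss_fight_0001.mp4",
    aot_sequence=[
        AoTStep(thought="The boss is winding up a heavy attack; dodge first.",
                action="dodge_left"),
        AoTStep(thought="Opening after the attack lands; close in and strike.",
                action="light_attack"),
    ],
)

# One plausible reading of a truncated-AoT strategy: drop the intermediate
# reasoning text at inference time and emit only the action tokens for speed.
truncated = [step.action for step in sample.aot_sequence]
print(truncated)  # ['dodge_left', 'light_attack']
```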