Improving the reasoning capabilities of embodied agents is crucial for robots to successfully complete complex human instructions in long-horizon manipulation tasks. Despite the success of large language models and vision-language models based on Supervised Fine-Tuning (SFT) in planning tasks, they still struggle with long-horizon manipulation tasks in complex real-world environments, owing to their limited common sense and reasoning capabilities. Since aligning general-purpose vision-language models to robotic planning tasks via supervised fine-tuning suffers from poor generalization and insufficient physical understanding, we propose RoboGPT-R1, a two-stage fine-tuning framework for embodied planning. In this framework, supervised training first instills foundational knowledge from expert sequences, and reinforcement learning then addresses the model's shortcomings in visual-spatial understanding and reasoning. To achieve physical understanding and action-sequence consistency in multi-step reasoning tasks, we design a rule-based reward function that jointly accounts for long-horizon performance and action constraints in the environment. The resulting reasoning model, trained on Qwen2.5-VL-3B, outperforms the larger GPT-4o-mini by 21.33% and surpasses prior work trained on Qwen2.5-VL-7B by 20.33% on the EmbodiedBench benchmark.
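To make the reward design concrete, the sketch below illustrates one plausible form of a rule-based reward that combines an action-constraint term with a long-horizon consistency term. It is a minimal illustration under our own assumptions: the function names, the prefix-matching heuristic, and the weighting scheme are hypothetical and are not taken from the paper's actual implementation.

```python
# Hypothetical sketch of a rule-based reward for embodied planning RL.
# Assumptions (not from the paper): the reward mixes (1) an action-validity
# term that penalizes steps violating environment constraints and (2) a
# long-horizon term that scores prefix agreement with an expert plan.

from typing import List, Set


def action_constraint_score(plan: List[str], valid_actions: Set[str]) -> float:
    """Fraction of predicted steps that are executable in the environment."""
    if not plan:
        return 0.0
    return sum(a in valid_actions for a in plan) / len(plan)


def long_horizon_score(plan: List[str], expert: List[str]) -> float:
    """Length of the longest matching prefix with the expert sequence,
    normalized by the expert plan length (rewards ordered consistency)."""
    match = 0
    for pred, ref in zip(plan, expert):
        if pred != ref:
            break
        match += 1
    return match / max(len(expert), 1)


def rule_based_reward(plan: List[str], expert: List[str],
                      valid_actions: Set[str],
                      w_constraint: float = 0.5,
                      w_horizon: float = 0.5) -> float:
    """Weighted combination of the two terms; weights are illustrative."""
    return (w_constraint * action_constraint_score(plan, valid_actions)
            + w_horizon * long_horizon_score(plan, expert))


# Usage example with a toy plan: two valid, in-order steps out of three.
expert_plan = ["pick up mug", "move to sink", "put mug in sink"]
predicted = ["pick up mug", "move to sink", "fly to moon"]
valid = {"pick up mug", "move to sink", "put mug in sink", "open tap"}
print(rule_based_reward(predicted, expert_plan, valid))  # 0.5*(2/3) + 0.5*(2/3) ≈ 0.67
```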