Large vision-language models (VLMs) fine-tuned on specialized visual instruction-following data have exhibited impressive language reasoning capabilities across various scenarios. However, this fine-tuning paradigm may not efficiently learn optimal decision-making agents in multi-step, goal-directed tasks from interactive environments. To address this challenge, we propose an algorithmic framework that fine-tunes VLMs with reinforcement learning (RL). Specifically, our framework provides a task description and then prompts the VLM to generate chain-of-thought (CoT) reasoning, enabling the VLM to efficiently explore intermediate reasoning steps that lead to the final text-based action. Next, the framework parses the open-ended text output into an executable action and interacts with the environment to obtain goal-directed task rewards. Finally, our framework uses these task rewards to fine-tune the entire VLM with RL. Empirically, we demonstrate that our proposed framework enhances the decision-making capabilities of VLM agents across various tasks, enabling 7B models to outperform commercial models such as GPT-4V and Gemini. Furthermore, we find that CoT reasoning is a crucial component for performance improvement: removing it results in a significant decrease in the overall performance of our method.
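To make the pipeline described above concrete, the following is a minimal, self-contained sketch of the interaction loop: the policy emits CoT text ending in a formatted action, the text is parsed into an executable action, the environment returns a task reward, and the collected trajectories feed an RL update. All names here (`DummyVLMPolicy`, `DummyEnv`, `parse_action`, the `"action": "..."` output format) are illustrative assumptions, not the authors' actual code or prompt format.

```python
import re
import random

class DummyVLMPolicy:
    """Stub policy: stands in for a real VLM (e.g., LLaVA) that would
    condition on the image observation and task description."""
    ACTIONS = ["left", "right", "up", "down"]

    def generate(self, observation: str, task_description: str) -> str:
        # A real VLM would produce free-form CoT reasoning followed by
        # an action in an agreed-upon format; we fake that here.
        thought = f"The goal is: {task_description}. I should move toward it."
        action = random.choice(self.ACTIONS)
        return f'{thought} "action": "{action}"'

    def update(self, trajectories) -> None:
        # Placeholder for the RL step: the paper fine-tunes the entire VLM
        # on the collected task rewards (e.g., with a PPO-style objective).
        pass

class DummyEnv:
    """Stub goal-directed environment with text observations and rewards."""
    def reset(self) -> str:
        self.steps = 0
        return "You are at the start."

    def step(self, action: str):
        self.steps += 1
        reward = 1.0 if action == "right" else 0.0  # toy goal: move right
        done = self.steps >= 5
        return f"You moved {action}.", reward, done

def parse_action(text: str) -> str:
    """Parse the open-ended CoT output into an executable action string."""
    match = re.search(r'"action":\s*"(\w+)"', text)
    return match.group(1) if match else "noop"  # fall back on parse failure

def collect_episode(policy, env, task_description):
    obs, trajectory, done = env.reset(), [], False
    while not done:
        cot_output = policy.generate(obs, task_description)  # CoT + action
        action = parse_action(cot_output)                    # text -> action
        obs, reward, done = env.step(action)                 # task reward
        trajectory.append((cot_output, action, reward))
    return trajectory

policy, env = DummyVLMPolicy(), DummyEnv()
for _ in range(3):  # outer RL loop: collect, then fine-tune on rewards
    traj = collect_episode(policy, env, "reach the goal on the right")
    policy.update([traj])
print("total reward:", sum(r for _, _, r in traj))
```

The sketch highlights one design point from the abstract: because the VLM's output is open-ended text, a parser with a graceful fallback sits between generation and execution, and the CoT portion is kept in the trajectory so the RL update can shape the intermediate reasoning, not just the final action.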