AI agents are increasingly used to solve real-world tasks by reasoning over multi-turn user interactions and invoking external tools. However, applying reinforcement learning (RL) to such settings remains difficult: realistic objectives often lack verifiable rewards and instead emphasize open-ended behaviors; RL for multi-turn, multi-step agentic tool use is still underexplored; and building and maintaining executable tool environments is costly, limiting scale and coverage. We propose CM2, an RL framework that replaces verifiable outcome rewards with checklist rewards. CM2 decomposes each turn's intended behavior into fine-grained binary criteria with explicit evidence grounding and structured metadata, turning open-ended judging into more stable classification-style decisions. To balance stability and informativeness, our method assigns rewards sparsely while evaluating against dense criteria. Training is performed in a scalable LLM-simulated tool environment, avoiding heavy engineering for large tool sets. Experiments show that CM2 consistently improves over supervised fine-tuning (SFT). Starting from an 8B base model and training on an 8k-example RL dataset, CM2 improves over its SFT counterpart by 8 points on τ²-Bench, 10 points on BFCL-V4, and 12 points on ToolSandbox. These results match or even outperform similarly sized open-source baselines, including the judging model. CM2 thus provides a scalable recipe for optimizing multi-turn, multi-step tool-using agents without relying on verifiable rewards. Code is available at: https://github.com/namezhenzhang/CM2-RLCR-Tool-Agent.
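To make the "sparse reward assignment, dense evaluation criteria" idea concrete, here is a minimal sketch of how a checklist reward might be computed. All names and the schema (`Criterion`, `checklist_reward`, the `evidence` and `metadata` fields) are illustrative assumptions, not the paper's actual implementation: each criterion is judged as an independent binary decision, and only the aggregated fraction reaches the policy as a single turn-level scalar.

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    """One fine-grained binary check for a single turn (hypothetical schema)."""
    description: str  # intended behavior, e.g. "agent calls the lookup tool before answering"
    evidence: str     # where in the trajectory the judge should look (evidence grounding)
    metadata: dict = field(default_factory=dict)  # structured metadata, e.g. criterion category

def checklist_reward(verdicts: list[bool]) -> float:
    """Collapse many binary judgments into one sparse turn-level scalar.

    Each criterion is judged independently (a classification-style decision),
    but the policy only receives the aggregated satisfaction rate, keeping the
    reward signal stable while the evaluation stays fine-grained.
    """
    if not verdicts:
        return 0.0
    return sum(verdicts) / len(verdicts)

# Example: a judge model marked 3 of 4 criteria as satisfied for this turn.
print(checklist_reward([True, True, False, True]))  # 0.75
```

Averaging is just one plausible aggregation; weighting criteria via their metadata would be an equally natural variant.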