Large language models (LLMs) have achieved success acting as agents that interact with environments through tools such as search engines. However, LLMs are optimized for language generation rather than tool use during training and alignment, which limits their effectiveness as agents. To address this problem, previous work first collects interaction trajectories between LLMs and environments, then uses only the trajectories that successfully complete the task to fine-tune smaller models, making fine-tuning data scarce and costly to acquire. Discarding failed trajectories also wastes substantial data and resources and narrows the possible optimization paths during fine-tuning. In this paper, we argue that unsuccessful trajectories offer valuable insights and that LLMs can learn from them through appropriate quality control and fine-tuning strategies. By simply adding a prefix or suffix during training that tells the model whether to generate a successful trajectory, we improve model performance by a large margin on mathematical reasoning, multi-hop question answering, and strategic question answering tasks. We further analyze the inference results and find that our method provides a better trade-off between the valuable information and the errors contained in unsuccessful trajectories. To our knowledge, we are the first to demonstrate the value of negative trajectories and their application in agent-tuning scenarios. Our findings offer guidance for developing better agent-tuning methods and low-resource data usage techniques.
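As a rough illustration of the conditioning idea described above, the following Python sketch shows one way to tag trajectories with a success/failure control prefix before fine-tuning and to always request the success prefix at inference. The marker strings, record fields, and function names are hypothetical assumptions for exposition, not the paper's released implementation.

```python
# Minimal sketch (hypothetical, not the authors' code): keep failed
# trajectories and condition each training example on a prefix that
# indicates whether the target trajectory succeeded.

from typing import List, TypedDict


class Trajectory(TypedDict):
    prompt: str        # task instruction plus tool-use context
    trajectory: str    # full interaction trace (thoughts, tool calls, answer)
    success: bool      # whether the final answer solved the task


def build_training_examples(data: List[Trajectory]) -> List[dict]:
    """Format both successful and failed trajectories for fine-tuning,
    prepending an assumed control marker instead of discarding failures."""
    examples = []
    for item in data:
        prefix = "[SUCCESS]" if item["success"] else "[FAILURE]"
        examples.append({
            "input": f"{prefix} {item['prompt']}",
            "target": item["trajectory"],
        })
    return examples


def build_inference_prompt(task_prompt: str) -> str:
    """At test time, always condition on the success marker so the model
    is asked to produce a successful trajectory."""
    return f"[SUCCESS] {task_prompt}"
```

In this sketch the control signal is a prefix on the input; per the abstract, a suffix would serve the same purpose, and the choice of marker vocabulary is an implementation detail of the fine-tuning setup.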