Large language model (LLM) agents have demonstrated impressive capabilities in utilizing external tools and knowledge to boost accuracy and reduce hallucinations. However, developing prompting techniques that enable LLM agents to effectively use these tools and knowledge remains a heuristic and labor-intensive task. Here, we introduce AvaTaR, a novel and automated framework that optimizes an LLM agent to effectively leverage provided tools, improving performance on a given task. During optimization, we design a comparator module to iteratively deliver insightful and comprehensive prompts to the LLM agent by contrastively reasoning between positive and negative examples sampled from training data. We demonstrate AvaTaR on four complex multimodal retrieval datasets featuring textual, visual, and relational information, and three general question-answering (QA) datasets. We find AvaTaR consistently outperforms state-of-the-art approaches across all seven tasks, exhibiting strong generalization ability when applied to novel cases and achieving an average relative improvement of 14% on the Hit@1 metric for the retrieval datasets and 13% for the QA datasets. Code and dataset are available at https://github.com/zou-group/avatar.
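The optimization loop described above (a comparator that contrasts positive and negative training examples to iteratively refine the agent's prompt) can be sketched as follows. This is a minimal illustration only, not the paper's implementation: the function names, prompt wording, and the `llm` / `evaluate` callables are all hypothetical stand-ins.

```python
def optimize_agent_prompt(train_data, evaluate, llm, n_iters=3, k=2):
    """Sketch of an AvaTaR-style comparator loop (illustrative, simplified).

    train_data: list of training examples (here, plain strings).
    evaluate:   callable(instructions, example) -> score under current prompt.
    llm:        callable(prompt) -> text; stands in for a real LLM call.
    """
    instructions = "Answer the query using the available tools."
    for _ in range(n_iters):
        # Score every training example under the current instructions.
        scored = [(ex, evaluate(instructions, ex)) for ex in train_data]
        scored.sort(key=lambda pair: pair[1], reverse=True)
        positives = [ex for ex, _ in scored[:k]]   # best-performing examples
        negatives = [ex for ex, _ in scored[-k:]]  # worst-performing examples
        # Comparator: contrast successes against failures and ask the LLM
        # to propose improved instructions for the agent.
        comparator_prompt = (
            "Current instructions:\n" + instructions + "\n"
            "Succeeded on: " + "; ".join(positives) + "\n"
            "Failed on: " + "; ".join(negatives) + "\n"
            "Rewrite the instructions so the agent also handles the failures."
        )
        instructions = llm(comparator_prompt)
    return instructions
```

In the actual framework the comparator reasons over full agent trajectories and tool outputs rather than raw examples; the loop structure above only conveys the contrastive sample-score-rewrite cycle.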