Large language models (LLMs) are often augmented with tools to solve complex tasks. By generating code snippets and executing them through task-specific Application Programming Interfaces (APIs), they can offload certain functions to dedicated external modules, such as image encoding and performing calculations. However, most existing approaches to augmenting LLMs with tools are constrained by general-purpose APIs and lack the flexibility to tailor them to specific tasks. In this work, we present CRAFT, a general tool creation and retrieval framework for LLMs. It creates toolsets specifically curated for each task and equips LLMs with a component that retrieves tools from these sets, enhancing their capability to solve complex tasks. For each task, we collect specific code solutions by prompting GPT-4 to solve the training examples. After a validation step that ensures correctness, these solutions are abstracted into code snippets to enhance reusability and deduplicated for higher quality. At inference time, the language model retrieves snippets from the toolsets and then executes them, or generates the output conditioned on the retrieved snippets. Our method is designed to be flexible and offers a plug-and-play approach to adapting off-the-shelf LLMs to unseen domains and modalities, without any finetuning. Experiments on vision-language, tabular processing, and mathematical reasoning tasks show that our approach achieves substantial improvements compared to strong baselines. In addition, our in-depth analysis reveals that: (1) consistent performance improvement can be achieved by scaling up the number of tools and the capability of the backbone models; (2) each component of our approach contributes to the performance gains; (3) the created tools are well-structured and reliable, exhibiting low complexity and atomicity. The code is available at https://github.com/lifan-yuan/CRAFT.
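The retrieve-then-use step described above can be illustrated with a minimal sketch. This is not the paper's implementation: the bag-of-words similarity, the `doc`/`code` toolset fields, and the `retrieve` function are hypothetical stand-ins for a learned embedding retriever over abstracted tool snippets.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; a real system would use a learned
    # sentence-embedding model instead (hypothetical stand-in).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, toolset, k=1):
    # Rank abstracted tool snippets by similarity between the task
    # query and each tool's docstring; return the top-k code snippets.
    scored = sorted(toolset,
                    key=lambda t: cosine(embed(query), embed(t["doc"])),
                    reverse=True)
    return [t["code"] for t in scored[:k]]

# Hypothetical toolset of validated, abstracted snippets.
toolset = [
    {"doc": "count rows in a table matching a condition",
     "code": "def count_rows(table, cond): ..."},
    {"doc": "solve a quadratic equation for its roots",
     "code": "def solve_quadratic(a, b, c): ..."},
]
print(retrieve("how many rows have value greater than 10", toolset))
```

The retrieved snippet would then be placed in the LLM's context for execution or conditioning, which is what makes the approach plug-and-play: only the toolset and retriever change per task, not the backbone model.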