There is a growing need for Large Language Models (LLMs) to effectively use tools and external Application Programming Interfaces (APIs) to plan and complete tasks. As such, there is tremendous interest in methods that can acquire sufficient quantities of training and test data involving calls to tools/APIs. Two lines of research have emerged as the predominant strategies for addressing this challenge: the first focuses on synthetic data generation techniques, while the second curates task-adjacent datasets that can be transformed into API/tool-based tasks. In this paper, we focus on identifying, curating, and transforming existing datasets and, in turn, introduce API-BLEND, a large corpus for training and systematic testing of tool-augmented LLMs. The datasets mimic real-world scenarios involving API tasks such as API/tool detection, slot filling, and sequencing of the detected APIs. We demonstrate the utility of API-BLEND for both training and benchmarking purposes.