API testing faces increasing demands in software companies. Prior API testing tools were aware of certain types of dependencies between operations and parameters that need to be satisfied. However, because these dependencies are complex, their approaches, which rely mostly on manual effort or heuristic-based algorithms, have limitations. In this paper, we present KAT (Katalon API Testing), a novel AI-driven approach that leverages the large language model GPT in conjunction with advanced prompting techniques to autonomously generate test cases for validating RESTful APIs. Our comprehensive strategy encompasses various processes for constructing an operation dependency graph from an OpenAPI specification and for generating test scripts, constraint validation scripts, test cases, and test data. Our evaluation of KAT on 12 real-world RESTful services shows that it improves test coverage, detects more undocumented status codes, and produces fewer false positives than a state-of-the-art automated test generation tool. These results indicate the effectiveness of using a large language model to generate test scripts and data for API testing.
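To make the notion of an operation dependency graph concrete, the sketch below derives producer-consumer edges from a parsed OpenAPI specification using a simple name-matching heuristic: an edge is added when a field produced in one operation's response matches a parameter name of another operation. This is an illustrative baseline only, not KAT's GPT-based inference; the function name and the assumption of inline JSON response schemas are ours.

```python
# Minimal sketch (not KAT's method): build an operation dependency graph
# from a parsed OpenAPI spec by matching producer response fields to
# consumer parameter names. KAT itself infers dependencies with an LLM.

def build_dependency_graph(spec):
    """spec: parsed OpenAPI dict. Returns (producer, consumer, field) edges."""
    ops = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            op_id = op.get("operationId", f"{method.upper()} {path}")
            params = {p["name"] for p in op.get("parameters", [])}
            # Collect top-level property names from the 200 response,
            # assuming an inline application/json schema.
            schema = (op.get("responses", {})
                        .get("200", {})
                        .get("content", {})
                        .get("application/json", {})
                        .get("schema", {}))
            props = set(schema.get("properties", {}).keys())
            ops.append((op_id, params, props))

    edges = []
    for prod_id, _, props in ops:
        for cons_id, params, _ in ops:
            if prod_id == cons_id:
                continue
            # A shared name suggests the consumer depends on the producer.
            for field in sorted(props & params):
                edges.append((prod_id, cons_id, field))
    return edges


if __name__ == "__main__":
    # Tiny hypothetical spec: createUser returns an "id" that getUser consumes.
    spec = {
        "paths": {
            "/users": {
                "post": {
                    "operationId": "createUser",
                    "responses": {"200": {"content": {"application/json": {
                        "schema": {"properties": {"id": {"type": "string"}}}
                    }}}},
                }
            },
            "/users/{id}": {
                "get": {
                    "operationId": "getUser",
                    "parameters": [{"name": "id", "in": "path"}],
                    "responses": {},
                }
            },
        }
    }
    print(build_dependency_graph(spec))  # → [('createUser', 'getUser', 'id')]
```

Real tools refine such a graph further (e.g. by matching types and resolving `$ref` schemas); pure name matching is exactly the kind of heuristic whose limitations motivate KAT's LLM-based approach.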