Property-based testing (PBT), while an established technique in the software testing research community, is still relatively underused in real-world software. Pain points in writing property-based tests include implementing diverse random input generators and thinking of meaningful properties to test. Developers, however, are more amenable to writing documentation; plenty of library API documentation is available and can be used as natural language specifications for PBTs. As large language models (LLMs) have recently shown promise in a variety of coding tasks, we investigate using modern LLMs to automatically synthesize PBTs using two prompting techniques. A key challenge is to rigorously evaluate the LLM-synthesized PBTs. We propose a methodology to do so considering several properties of the generated tests: (1) validity, (2) soundness, and (3) property coverage, a novel metric that measures the ability of the PBT to detect property violations through generation of property mutants. In our evaluation on 40 Python library API methods across three models (GPT-4, Gemini-1.5-Pro, Claude-3-Opus), we find that with the best model and prompting approach, a valid and sound PBT can be synthesized in 2.4 samples on average. We additionally find that our metric for determining soundness of a PBT is aligned with human judgment of property assertions, achieving a precision of 100% and recall of 97%. Finally, we evaluate the property coverage of LLMs across all API methods and find that the best model (GPT-4) is able to automatically synthesize correct PBTs for 21% of properties extractable from API documentation.
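To make the pain points concrete, here is a minimal, hypothetical sketch of what a property-based test looks like in plain standard-library Python. The generator `random_int_list`, the driver `check_property`, and the choice of `sorted()` as the API under test are illustrative assumptions, not the paper's setup; libraries such as Hypothesis automate the generator part:

```python
import random
from collections import Counter

def random_int_list(rng, max_len=20):
    # Random input generator: integer lists of varying length and content.
    return [rng.randint(-1000, 1000) for _ in range(rng.randint(0, max_len))]

def check_property(prop, gen, trials=200, seed=0):
    # Run the property on many generated inputs; fail loudly on a violation.
    rng = random.Random(seed)
    for _ in range(trials):
        xs = gen(rng)
        assert prop(xs), f"property violated for input {xs!r}"

# Two properties one might extract from sorted()'s documentation:
# (1) sorting is idempotent; (2) the output is a permutation of the input.
check_property(lambda xs: sorted(sorted(xs)) == sorted(xs), random_int_list)
check_property(lambda xs: Counter(sorted(xs)) == Counter(xs), random_int_list)
```

In this framing, a "property mutant" in the sense of the coverage metric above would be a deliberately perturbed version of one of these assertions (e.g. weakening the permutation check); a useful PBT should fail when run against such a mutant.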