Model-based testing (MBT) is a method that supports the design and execution of test cases using models that specify the intended behavior of a system under test. While systematic literature reviews on MBT in general exist, the modeling and testing of performance requirements has received much less attention. Therefore, we conducted a systematic mapping study on model-based performance testing. Then, we studied natural language software requirements specifications to understand which performance requirements are typically specified and how. Since none of the identified MBT techniques supported a major benefit of modeling, namely identifying faults in requirements specifications, we developed the Performance Requirements verificatiOn and Test EnvironmentS generaTion approach (PRO-TEST). Finally, we evaluated PRO-TEST on 149 requirements specifications. We found and analyzed 57 primary studies in the systematic mapping study and extracted 50 performance requirements models. However, those models do not achieve the goals of MBT, which are validating requirements, ensuring their testability, and generating the minimum required test cases. We analyzed 77 Software Requirements Specification (SRS) documents, extracted 149 performance requirements from those SRS, and show that with PRO-TEST we can model performance requirements, find issues in those requirements, and detect missing ones. We detected three non-quantifiable requirements, 43 non-quantified requirements, and 180 underspecified parameters in the 149 modeled performance requirements. Furthermore, we generated 96 test environments from those models. By modeling performance requirements with PRO-TEST, we can identify issues in the requirements related to their ambiguity, measurability, and completeness. Additionally, it enables the generation of parameters for test environments.