High-quality test datasets are crucial for assessing the reliability of Deep Neural Networks (DNNs). Mutation testing evaluates test dataset quality by measuring a dataset's ability to uncover faults injected into DNNs, as quantified by the mutation score (MS). At the same time, its high computational cost motivates researchers to seek alternative test adequacy criteria. We propose Latent Space Class Dispersion (LSCD), a novel metric for quantifying the quality of test datasets for DNNs. It measures the degree of dispersion within a test dataset as observed in the latent space of a DNN. Our empirical study shows that LSCD reveals and quantifies deficiencies in the test datasets of three popular image-classification benchmarks for DNNs. Corner cases generated by automated fuzzing were found to enhance fault detection and improve the overall quality of the original test sets, as measured by both MS and LSCD. Our experiments revealed a high positive correlation (0.87) between LSCD and MS, substantially higher than that achieved by the well-studied Distance-based Surprise Coverage (0.25). These results were obtained from 129 mutants generated with pre-training mutation operators, are statistically significant, and rest on corner cases of high validity. These observations suggest that LSCD can serve as a cost-effective alternative to expensive mutation testing: it eliminates the need to generate mutant models while offering comparably valuable insights into test dataset quality for DNNs.
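The abstract does not give the formula for LSCD, so the following is only a minimal sketch of how a class-wise latent-space dispersion metric of this kind might be computed. It assumes LSCD-like behavior: take latent embeddings (e.g., penultimate-layer activations) of the test inputs, measure how spread out each class's embeddings are around their centroid, and normalize by the global spread. The function name `lscd` and every design choice below are hypothetical, not the authors' definition.

```python
# Illustrative sketch only: the abstract does not specify the LSCD formula.
# ASSUMPTION: dispersion = mean, over classes, of the average distance of each
# test embedding to its class centroid, normalized by the overall spread so
# the score is scale-invariant. All names here are hypothetical.
import numpy as np

def lscd(embeddings: np.ndarray, labels: np.ndarray) -> float:
    """Hypothetical Latent Space Class Dispersion.

    embeddings: (n_samples, d) latent vectors of the test inputs, e.g. the
                DNN's penultimate-layer activations.
    labels:     (n_samples,) integer class labels.
    """
    per_class = []
    for c in np.unique(labels):
        cls = embeddings[labels == c]
        centroid = cls.mean(axis=0)
        # Average Euclidean distance of class members to their centroid.
        per_class.append(np.linalg.norm(cls - centroid, axis=1).mean())
    # Normalize by the global spread of all embeddings around the global mean.
    global_spread = np.linalg.norm(
        embeddings - embeddings.mean(axis=0), axis=1
    ).mean()
    return float(np.mean(per_class) / global_spread)

# Hypothetical usage with a Keras classifier: extract penultimate-layer
# features for the test set, then score the dataset.
#   feats = tf.keras.Model(model.input, model.layers[-2].output).predict(x_test)
#   print(lscd(feats, y_test))
```

Under this reading, a higher score would indicate that the test inputs of each class cover a wider region of the latent space, which is consistent with the abstract's claim that fuzzing-generated corner cases raise both LSCD and MS.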