In this paper, we propose shifting the focus of robustness evaluation for Neural Program Repair (NPR) techniques toward naturally-occurring data transformations. To accomplish this, we first examine the naturalness of semantic-preserving transformations through a two-stage human study. This study includes (1) interviews with senior software developers to establish concrete criteria for evaluating the naturalness of these transformations, and (2) a survey involving 10 developers to assess the naturalness of 1,178 transformations, i.e., pairs of original and transformed programs, applied to 225 real-world bugs. Our findings show that only 60% of these transformations are deemed natural, while 20% are considered unnatural, with strong agreement among annotators. Moreover, the unnaturalness of these transformations significantly impacts both their applicability to benchmarks and the conclusions drawn from robustness testing. Next, we conduct natural robustness testing on NPR techniques to assess their true effectiveness against real-world data variations. Our experimental results reveal a substantial number of prediction changes across NPR techniques, leading to significant reductions in both plausible and correct patch rates when performance on the transformed dataset is compared with that on the original dataset. Additionally, we observe notable differences in performance improvements between NPR techniques, suggesting potential biases in NPR evaluation introduced by limited datasets. Finally, we propose an LLM-based metric to automate the assessment of transformation naturalness, ensuring the scalability of natural robustness testing.
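To make the notion of transformation naturalness concrete, the following is a minimal, hypothetical Java sketch (not drawn from the paper's 1,178 transformations or its benchmark) contrasting two semantics-preserving rewrites of the same guard statement: one that developers would plausibly rate as natural and one as unnatural.

```java
/**
 * Hypothetical illustration of two semantics-preserving transformations
 * applied to the same bounds-checking guard. All three methods behave
 * identically; only the surface form of the code differs.
 */
public class NaturalnessExample {

    // Original form of the guard.
    static int getOriginal(int[] values, int index) {
        if (index < 0 || index >= values.length) {
            throw new IllegalArgumentException("index out of range");
        }
        return values[index];
    }

    // A rewrite developers would plausibly rate as natural:
    // the condition is extracted into a well-named local variable.
    static int getNatural(int[] values, int index) {
        boolean outOfRange = index < 0 || index >= values.length;
        if (outOfRange) {
            throw new IllegalArgumentException("index out of range");
        }
        return values[index];
    }

    // A rewrite developers would plausibly rate as unnatural:
    // semantics are preserved, but the injected dead branch is code
    // no developer would write by hand.
    static int getUnnatural(int[] values, int index) {
        if (false) { int unused = 0; } // dead code inserted by the transformation
        if (index < 0 || index >= values.length) {
            throw new IllegalArgumentException("index out of range");
        }
        return values[index];
    }

    public static void main(String[] args) {
        int[] values = {3, 1, 4};
        // All three variants return the same result on valid input.
        System.out.println(getOriginal(values, 1));
        System.out.println(getNatural(values, 1));
        System.out.println(getUnnatural(values, 1));
    }
}
```

Both rewrites preserve behavior, yet only the first resembles an edit a developer might actually make, which is the distinction the human study and the LLM-based metric are designed to capture.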