Detecting and tracking code clones can ease various software development and maintenance tasks when changes to a code fragment must be propagated to all of its copies. Several deep learning-based clone detection models have appeared in the literature for detecting syntactic and semantic clones, widely evaluated with the BigCloneBench dataset. However, class imbalance and the small number of semantic clones make BigCloneBench less than ideal for interpreting model performance. Researchers also use other datasets, such as GoogleCodeJam, OJClone, and SemanticCloneBench, to understand model generalizability. To overcome the limitations of existing datasets, the GPT-assisted semantic and cross-language clone dataset GPTCloneBench has been released. However, how these models compare across datasets remains unclear. In this paper, we propose a multi-step evaluation approach for five state-of-the-art clone detection models that leverages existing benchmark datasets, including GPTCloneBench, and uses mutation operators to probe model capabilities. Specifically, we examine three highly performing single-language models (ASTNN, GMN, CodeBERT) on BigCloneBench, SemanticCloneBench, and GPTCloneBench, testing their robustness with mutation operations. Additionally, we compare them against cross-language models (C4, CLCDSA) known for detecting semantic clones. While the single-language models show high F1 scores on BigCloneBench, their performance on SemanticCloneBench varies by up to 20%. Interestingly, the cross-language model C4 outperforms the other models on SemanticCloneBench by around 7% and performs similarly on BigCloneBench and GPTCloneBench. On mutation-based datasets, C4 is more robust (less than 1% difference in performance) than the single-language models, which show high variability.
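To illustrate the kind of mutation operator used for robustness testing, here is a minimal sketch (not the paper's actual tooling, which targets the benchmark languages): a semantics-preserving identifier-renaming mutation, implemented for Python code with the standard `ast` module. A robust clone detector should still classify the mutant as a clone of the original.

```python
import ast


class RenameVariables(ast.NodeTransformer):
    """Mutation operator: systematically rename every identifier.

    The mutation preserves program semantics, so a robust clone
    detection model should still flag (original, mutant) as a clone pair.
    """

    def __init__(self):
        self.mapping = {}  # original name -> fresh name

    def visit_Name(self, node):
        # Assign each distinct identifier a fresh, consistent name.
        if node.id not in self.mapping:
            self.mapping[node.id] = f"var{len(self.mapping)}"
        node.id = self.mapping[node.id]
        return node


def mutate(source: str) -> str:
    """Return a semantics-preserving mutant of the given source snippet."""
    tree = ast.parse(source)
    tree = RenameVariables().visit(tree)
    return ast.unparse(tree)  # requires Python 3.9+


original = "total = 0\nfor x in data:\n    total = total + x"
print(mutate(original))
```

Other common operators in this family include statement reordering of independent statements, loop-construct exchange (`for` to `while`), and dead-code insertion; each keeps behavior fixed while perturbing the surface form the models see.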