Recent studies highlight various machine learning (ML)-based techniques for code clone detection, which can be integrated into developer tools such as static code analyzers. With the advances ML has brought to code understanding, ML-based code clone detectors can accurately identify and classify cloned pairs, especially semantic clones, but they often operate as black boxes, providing little insight into the decision-making process. Post hoc explainers, on the other hand, aim to interpret and explain the predictions of ML models after they are made, offering a way to understand the mechanisms driving a model's decisions. However, current post hoc techniques require white-box access to the ML model or are computationally expensive, indicating a need for more advanced post hoc explainers. In this paper, we propose a novel approach that leverages the in-context learning capabilities of large language models (LLMs) to elucidate the predictions made by ML-based code clone detectors. We conduct a study using ChatGPT-4 to explain the code clone results inferred by GraphCodeBERT. We find that our approach is promising as a post hoc explainer, providing correct explanations up to 98% of the time and good explanations 95% of the time. However, the explanations and the code line examples given by the LLM are useful only in some cases. We also find that lowering the temperature to zero helps increase the accuracy of the explanations. Lastly, we list insights that can lead to further improvements in future work. This study paves the way for future research on using LLMs as post hoc explainers for various software engineering tasks.