Recent progress in large language models (LLMs) for code generation has raised serious concerns about intellectual property protection. Malicious users can exploit LLMs to produce paraphrased versions of proprietary code that closely resemble the original. While the potential for LLM-assisted code paraphrasing continues to grow, research on detecting it remains limited, underscoring an urgent need for detection systems. We respond to this need by proposing two tasks. The first task is to detect whether code generated by an LLM is a paraphrased version of original human-written code. The second task is to identify which LLM was used to paraphrase the original code. For these tasks, we construct LPcode, a dataset of pairs of human-written code and LLM-paraphrased code produced by various LLMs. We statistically confirm significant differences between the coding styles of human-written and LLM-paraphrased code, particularly in naming consistency, code structure, and readability. Based on these findings, we develop LPcodedec, a detection method that identifies paraphrase relationships between human-written and LLM-generated code and discovers which LLM was used for the paraphrasing. LPcodedec outperforms the best baselines on both tasks, improving F1 scores by 2.64% and 15.17% while achieving speedups of 1,343x and 213x, respectively. Our code and data are available at https://github.com/Shinwoo-Park/detecting_llm_paraphrased_code_via_coding_style_features.
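To make the idea of coding-style features concrete, the sketch below computes a few simple features along the three axes the abstract names (naming consistency, code structure, readability). The specific features, helper names, and example snippets here are illustrative assumptions, not the actual LPcodedec feature set; the point is only that paired style features are cheap to extract, which is what makes a feature-based detector fast.

```python
import ast
import re
import statistics

def style_features(code: str) -> dict:
    """Toy coding-style features (assumed for illustration; the paper's
    actual feature definitions may differ)."""
    tree = ast.parse(code)
    # Collect identifiers: function names, argument names, and variable uses.
    idents = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            idents.append(node.name)
        elif isinstance(node, ast.arg):
            idents.append(node.arg)
        elif isinstance(node, ast.Name):
            idents.append(node.id)
    snake = sum(1 for n in idents if re.fullmatch(r"[a-z_][a-z0-9_]*", n))
    lines = [ln for ln in code.splitlines() if ln.strip()]
    return {
        # Naming consistency: fraction of snake_case identifiers.
        "snake_case_ratio": snake / len(idents) if idents else 0.0,
        # Readability proxies: line length and comment density.
        "avg_line_len": statistics.mean(len(ln) for ln in lines),
        "comment_ratio": sum(1 for ln in lines if ln.lstrip().startswith("#")) / len(lines),
        # Structure proxy: deepest indentation level (4-space indents).
        "max_indent": max((len(ln) - len(ln.lstrip())) // 4 for ln in lines),
    }

# Hypothetical pair: human-written original vs. a camelCase paraphrase.
human = "def add_nums(a, b):\n    # sum two ints\n    return a + b\n"
para = "def addNums(x, y):\n    return x + y\n"

# Feature differences over a (original, candidate) pair would feed a
# lightweight classifier for both detection tasks.
diff = {k: style_features(human)[k] - style_features(para)[k]
        for k in style_features(human)}
```

In this toy pair, the paraphrase shifts the naming style (camelCase) and drops the comment, so the feature differences are nonzero even though the two functions are semantically identical.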