With the advent of large language models (LLMs), numerous software service providers (SSPs) are developing LLMs customized for code generation tasks, such as CodeLlama and Copilot. However, attackers can leverage these LLMs to create malicious software, posing a threat to the software ecosystem; for example, they can automate the creation of advanced phishing malware. To address this issue, we first conduct an empirical study and design a prompt dataset, MCGTest, which involves approximately 400 person-hours of work and consists of 406 malicious code generation tasks. Utilizing this dataset, we propose MCGMark, the first robust, code-structure-aware, and encodable watermarking approach for tracing LLM-generated code. We embed encodable information by controlling token selection during generation, and we preserve output quality by accounting for probabilistic outliers. Additionally, we enhance the robustness of the watermark by considering the structural features of malicious code, preventing the watermark from being embedded in easily modified positions such as comments. We validate the effectiveness and robustness of MCGMark on DeepSeek-Coder. MCGMark achieves an embedding success rate of 88.9% within a maximum output limit of 400 tokens. Furthermore, it demonstrates strong robustness and has minimal impact on the quality of the output code. Our approach assists SSPs in tracing malicious code generated by LLMs and holding the responsible parties accountable.
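To illustrate the general idea of embedding encodable information by controlling token selection, the sketch below implements a generic vocabulary-partition watermark: at each decoding step the vocabulary is split into two reproducible halves keyed on the previous token, and the sampler is biased toward the half that encodes the current message bit. This is a minimal, self-contained illustration of the technique class, not the authors' MCGMark algorithm; the function names, the greedy decoding, and the fixed bias `delta` are all assumptions for demonstration.

```python
import hashlib
import random


def partition_vocab(prev_token: str, vocab: list[str]) -> tuple[list[str], list[str]]:
    """Split the vocabulary into a "0"-half and a "1"-half, seeded by the
    previous token so the same split can be re-derived at detection time."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(vocab)  # sort first so the split is order-independent
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]


def embed_bit(logits: dict[str, float], prev_token: str, bit: int,
              delta: float = 4.0) -> str:
    """Bias the logits toward the vocabulary half that encodes `bit`,
    then greedily pick the highest-scoring token."""
    zero_half, one_half = partition_vocab(prev_token, list(logits))
    target = one_half if bit else zero_half
    biased = {t: s + (delta if t in target else 0.0) for t, s in logits.items()}
    return max(biased, key=biased.get)


def extract_bit(token: str, prev_token: str, vocab: list[str]) -> int:
    """Recover the embedded bit by re-deriving the same partition."""
    _, one_half = partition_vocab(prev_token, vocab)
    return int(token in one_half)


# Demo: embed a 5-bit message into a toy generation, then recover it.
vocab = [f"tok{i}" for i in range(20)]
logits = {t: (i % 7) * 0.1 for i, t in enumerate(vocab)}  # toy model scores
message = [1, 0, 1, 1, 0]

generated, prev = [], "def"
for bit in message:
    tok = embed_bit(logits, prev, bit)
    generated.append(tok)
    prev = tok

recovered, prev = [], "def"
for tok in generated:
    recovered.append(extract_bit(tok, prev, vocab))
    prev = tok

assert recovered == message
```

A real system would additionally skip embedding at positions where the bias would force a low-probability token (the abstract's "probabilistic outliers") and at easily edited positions such as comments, which is where MCGMark's code-structure awareness comes in.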