Large Language Models for Code (LLM4Code) have become an integral part of developers' workflows, assisting with tasks such as code completion and generation. However, these models have been found to exhibit undesired behaviors after release, such as generating buggy code, because they are trained on vast amounts of source code that contains such buggy code. The training data (usually drawn from open-source software) keeps evolving, e.g., developers fix buggy code over time. However, leveraging this evolution to mitigate LLM4Code's undesired behaviors is non-trivial, as retraining models on the updated dataset usually takes considerable time and resources. This motivates us to propose the concept of hotfixing LLM4Code: mitigating LLM4Code's undesired behaviors effectively and efficiently, with minimal negative effects. This paper focuses mainly on hotfixing LLM4Code so that the models generate less buggy code and more fixed code. We begin by demonstrating that models from the popular CodeGen family frequently generate buggy code. We then define three learning objectives for hotfixing and design multiple loss functions for each objective: (1) learning the desired behaviors, (2) unlearning the undesired behaviors, and (3) retaining knowledge of other code. We evaluate four fine-tuning techniques for hotfixing the models and gain the following insights. Optimizing these three objectives jointly, using LoRA (low-rank adaptation), effectively influences the model's behavior: it increases the generation of fixed code by up to 108.42% and decreases the generation of buggy code by up to 50.47%. Statistical tests confirm that hotfixing does not significantly affect the models' functional correctness on the HumanEval benchmark. Additionally, to evaluate the generalizability of hotfixing, we apply it to reduce the exposure of email addresses by 99.30%.
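The three learning objectives above can be combined into a single training loss. The following is a minimal, self-contained sketch of one plausible combination (maximize likelihood of fixed code, minimize likelihood of buggy code via a negated term, and penalize divergence from the original model); the function names, toy token distributions, and weights `alpha` and `beta` are illustrative assumptions, not the paper's exact formulation:

```python
import math

def nll(probs, targets):
    """Average negative log-likelihood of target token ids under
    per-position probability distributions."""
    return -sum(math.log(p[t]) for p, t in zip(probs, targets)) / len(targets)

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def hotfix_loss(model_probs, fixed_targets, buggy_targets,
                ref_probs, alpha=1.0, beta=1.0):
    """Combined hotfixing objective (illustrative):
    (1) learn: fit the fixed-code tokens,
    (2) unlearn: push probability away from buggy-code tokens
        (negated NLL, i.e., gradient ascent on the buggy sequence),
    (3) retain: stay close to the original model's distributions."""
    learn = nll(model_probs, fixed_targets)
    unlearn = -nll(model_probs, buggy_targets)
    retain = sum(kl(r, m) for r, m in zip(ref_probs, model_probs)) / len(ref_probs)
    return learn + alpha * unlearn + beta * retain
```

In practice the per-position distributions would come from the LLM4Code model being hotfixed (with only LoRA adapters trainable) and `ref_probs` from a frozen copy of the original model, so that the retention term anchors behavior on unrelated code.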