Large code models (LCMs) have remarkably advanced the field of code intelligence. Despite their impressive capabilities, they still face practical deployment challenges, such as high costs, the limited accessibility of proprietary LCMs, and the adaptability issues of ultra-large LCMs. These challenges highlight the critical need for more accessible, lightweight yet effective LCMs. In this paper, we propose IterKD, an Iterative Knowledge Distillation framework, which aims to continually transfer the programming capabilities of larger, more advanced LCMs (Teacher) to smaller, less powerful LCMs (Student). IterKD consists of three stages in one cycle: (1) the Correct-and-Fault Knowledge Delivery stage improves the student model's ability to recognize errors while preserving its basic programming skill during knowledge transfer, using correctness-aware supervised learning and fault-aware contrastive learning; (2) the Multi-view Feedback stage measures the quality of the student model's outputs from two views: model-based and static tool-based measurement; (3) the Feedback-based Knowledge Update stage adaptively updates the student model by generating new questions at different difficulty levels, where the levels are categorized based on the feedback from the previous stage. By iterating this training cycle, the student model is continuously refined, learning increasingly advanced programming skills from the teacher model. Finally, based on the proposed IterKD framework, we develop a lightweight yet effective LCM, named IterCoder, which is built upon CodeLlama-7B. Experimental results show that IterCoder achieves a Pass@1 score of 65.2 on the HumanEval benchmark, outperforming over-30B-sized LCMs by an average of 47.51% and surpassing comparable-sized LCMs by an average of 118.47%.
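The three-stage cycle described above can be sketched as a training loop. The following is a minimal, runnable toy illustration, not the paper's implementation: all names (`ToyModel`, `static_analyze`, `THRESHOLD`) and the scalar "skill" proxy for programming ability are illustrative assumptions.

```python
# Toy sketch of one IterKD cycle. All classes, scores, and update rules here
# are illustrative stand-ins, not the paper's actual method.

THRESHOLD = 1.0  # hypothetical cutoff separating difficulty buckets

class ToyModel:
    """Stand-in for an LCM; a scalar 'skill' crudely proxies ability."""
    def __init__(self, skill):
        self.skill = skill

    def solve(self, question):
        return f"solution({question})"

    def judge(self, question, answer):
        # Stage 2, view 1: model-based measurement (toy score in [0, 1]).
        return min(1.0, self.skill)

def static_analyze(answer):
    # Stage 2, view 2: static tool-based measurement (toy lint score).
    return 1.0 if answer.startswith("solution") else 0.0

def knowledge_delivery(student, teacher, questions):
    # Stage 1: correctness-aware SL + fault-aware CL, modeled here as the
    # student's skill moving a fraction of the way toward the teacher's.
    for _ in questions:
        student.skill += 0.1 * (teacher.skill - student.skill)

def multi_view_feedback(student, teacher, questions):
    # Stage 2: score each student answer from both views.
    feedback = []
    for q in questions:
        answer = student.solve(q)
        feedback.append((q, teacher.judge(q, answer) + static_analyze(answer)))
    return feedback

def knowledge_update(feedback):
    # Stage 3: generate new questions whose difficulty bucket depends on the
    # feedback score (here, just a label appended to the old question).
    return [f"{q}-{'hard' if score < THRESHOLD else 'harder'}"
            for q, score in feedback]

def iterkd(student, teacher, questions, cycles=3):
    for _ in range(cycles):
        knowledge_delivery(student, teacher, questions)
        feedback = multi_view_feedback(student, teacher, questions)
        questions = knowledge_update(feedback)
    return student

student = ToyModel(skill=0.2)
teacher = ToyModel(skill=0.9)
iterkd(student, teacher, ["q1", "q2"])
```

Each cycle nudges the student toward the teacher and regenerates the question set from feedback, mirroring how IterKD uses the last stage's measurements to pick the next round's difficulty levels.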