Direct preference learning offers a promising and computation-efficient alternative to supervised fine-tuning (SFT) for improving code generation in coding large language models (code LMs). However, the scarcity of reliable preference data bottlenecks the ability of direct preference learning to improve the coding accuracy of code LMs. In this paper, we introduce \underline{\textbf{D}}irect Preference Learning with Only \underline{\textbf{S}}elf-Generated \underline{\textbf{T}}ests and \underline{\textbf{C}}ode (DSTC), a framework that leverages only self-generated code snippets and tests to construct reliable preference pairs, so that direct preference learning can improve LM coding accuracy without external annotations. DSTC combines a minimax selection process with test-code concatenation to improve preference pair quality, reducing the influence of incorrect self-generated tests and enhancing model performance without the need for costly reward models. When applied with direct preference learning methods such as Direct Preference Optimization (DPO) and Kahneman-Tversky Optimization (KTO), DSTC yields stable improvements in coding accuracy (pass@1 score) across diverse coding benchmarks, including HumanEval, MBPP, and BigCodeBench, demonstrating both its effectiveness and its scalability across model sizes while reducing reliance on expensive annotated coding datasets.
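To make the construction concrete, below is a minimal, hypothetical sketch of how a preference pair might be built from only self-generated code and tests. The function name, the `passes` callback, and the simple max/min scoring rule are illustrative assumptions, not the paper's exact minimax procedure; it only conveys the idea of scoring each snippet against the self-generated tests and concatenating tests with code.

```python
# Hypothetical sketch of DSTC-style preference-pair construction.
# All names and the max/min selection rule are illustrative assumptions.

def build_preference_pair(snippets, tests, passes):
    """
    snippets: list of self-generated code strings
    tests:    list of self-generated test strings
    passes:   callable (code, test) -> bool, True if `code` passes `test`
    Returns (chosen, rejected), each a test-code concatenation.
    """
    # Score each snippet by how many self-generated tests it passes.
    scores = [sum(passes(code, test) for test in tests) for code in snippets]
    chosen_code = snippets[scores.index(max(scores))]
    rejected_code = snippets[scores.index(min(scores))]

    # Test-code concatenation: pair the same tests with each snippet so the
    # preference signal covers both the generated tests and the generated code.
    def concat(code):
        return "\n".join(tests) + "\n" + code

    return concat(chosen_code), concat(rejected_code)
```

In this toy form, scoring snippets by the number of self-generated tests they pass filters out candidates that even unreliable tests reject, which mirrors the paper's motivation of reducing the influence of incorrect self-generated tests without an external reward model.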