The code generation capabilities of LLMs have substantially improved the efficiency of programming tasks. However, LLM-generated code still suffers from compilation and runtime errors. Existing offline preference optimization methods primarily rely on pass/fail signals in the preference data to enhance LLMs' coding abilities, overlooking the fine-grained error types present in failing code. To address this, we propose Adaptively Progressive Preference Optimization (AP2O) for coding (i.e., AP2O-Coder), a method that adaptively and systematically guides LLMs to reduce errors in generated code. Specifically, we construct an error notebook from failed code samples and progressively optimize the LLM to correct errors type by type. Furthermore, we adaptively replay error types to target the LLM's changing weaknesses throughout training. Through extensive experiments on both code-specialized and general LLMs (Llama, Qwen, and DeepSeek series) with parameters ranging from 0.5B to 34B, our AP2O-Coder improves code generation performance by up to 3% in pass@k while using less preference data. Code: https://github.com/TsingZ0/AP2O
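To make the error-notebook and adaptive-replay idea concrete, the following is a minimal sketch of how failed generations might be grouped by error type and then scheduled for preference optimization, with replay weighted by the model's current per-type failure rate. It is not the authors' implementation; all names here (classify_error, build_error_notebook, adaptive_schedule, failure_rates) are hypothetical placeholders, and the real method is described in the paper and the repository linked above.

```python
# Hypothetical sketch of the abstract's pipeline, not the paper's actual code:
# 1) classify each failed generation by its error type,
# 2) collect (prompt, chosen, rejected) preference pairs into an "error notebook",
# 3) sample training batches with error types weighted by current failure rates
#    (adaptive replay of the model's persistent weaknesses).

import random
from collections import defaultdict

def classify_error(traceback_text: str) -> str:
    """Rough error-type label taken from the last traceback line, e.g. 'SyntaxError'."""
    last = traceback_text.strip().splitlines()[-1] if traceback_text.strip() else ""
    return last.split(":")[0] or "UnknownError"

def build_error_notebook(failed_samples):
    """failed_samples: iterable of (prompt, failed_code, passing_code, traceback)."""
    notebook = defaultdict(list)
    for prompt, bad, good, tb in failed_samples:
        notebook[classify_error(tb)].append(
            {"prompt": prompt, "chosen": good, "rejected": bad}
        )
    return notebook

def adaptive_schedule(notebook, failure_rates, pairs_per_round=64):
    """Sample preference pairs, weighting each error type by the model's
    current failure rate on that type."""
    types = list(notebook)
    weights = [failure_rates.get(t, 1.0) for t in types]
    batch = []
    for _ in range(pairs_per_round):
        t = random.choices(types, weights=weights, k=1)[0]
        batch.append(random.choice(notebook[t]))
    return batch
```

In this sketch, the per-type failure rates would be re-measured periodically on a held-out set so that the sampling weights track the LLM's changing weaknesses as training progresses.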