Large language models (LLMs) for code have become indispensable in various domains, including code generation, reasoning tasks, and agent systems. While open-access code LLMs are increasingly approaching the performance levels of proprietary models, high-quality code LLMs suitable for rigorous scientific investigation, particularly those with reproducible data processing pipelines and transparent training protocols, remain limited. This scarcity stems from various challenges, including resource constraints, ethical considerations, and the competitive advantage of withholding model advances. To address this gap, we introduce OpenCoder, a top-tier code LLM that not only achieves performance comparable to leading models but also serves as an ``open cookbook'' for the research community. Unlike most prior efforts, we release not only the model weights and inference code but also the reproducible training data, the complete data processing pipeline, rigorous experimental ablation results, and detailed training protocols for open scientific research. Through this comprehensive release, we identify the key ingredients for building a top-tier code LLM: (1) code-optimized heuristic rules for data cleaning and methods for data deduplication, (2) recall of text corpora related to code, and (3) high-quality synthetic data in both the annealing and supervised fine-tuning stages. By offering this level of openness, we aim to broaden access to all aspects of a top-tier code LLM, with OpenCoder serving both as a powerful model and as an open foundation to accelerate research and enable reproducible advancements in code AI.