Despite recent progress achieved by code large language models (LLMs), their remarkable abilities largely depend on fine-tuning on high-quality data, posing challenges for data collection and annotation. To address this, current methods often design various data flywheels to gather complex code instructions, enabling models to handle more intricate tasks. However, these approaches typically rely on off-the-shelf datasets and on data augmentation from a limited pool of proprietary LLMs (e.g., GPT-4, Claude), which limits the diversity of the constructed data and makes it prone to systematic biases. In this paper, we propose WarriorCoder, which learns from expert battles to address these limitations. Specifically, we create an arena for current expert code LLMs, where each model challenges and responds to others' challenges, with evaluations conducted by uninvolved judge models. This competitive framework generates novel training data from scratch, harnessing the strengths of all participants. Experimental results demonstrate that WarriorCoder achieves competitive performance compared to previous methods, even without relying on proprietary LLMs.
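To make the arena protocol concrete, below is a minimal sketch of one battle loop, assuming hypothetical `Expert` and `Judge` interfaces (the `generate` and `pick_winner` methods are placeholders for real model calls, not the authors' implementation): each expert challenges every other expert, only uninvolved judges vote, and the winning response is kept as a new training pair.

```python
import itertools
from dataclasses import dataclass


@dataclass
class Expert:
    """Stand-in for a code LLM endpoint (hypothetical interface)."""
    name: str

    def generate(self, prompt: str) -> str:
        # Placeholder: a real system would query the model here.
        return f"[{self.name}] answer to: {prompt[:40]}"


@dataclass
class Judge:
    """Stand-in for a judge LLM that compares two candidate answers."""
    name: str

    def pick_winner(self, instruction: str, answers: dict) -> str:
        # Placeholder vote: a real judge model would score both answers.
        return min(answers)


def run_arena(experts, judges):
    """Each expert challenges every other expert; uninvolved judges vote,
    and the winning response becomes an (instruction, response) pair."""
    data = []
    for attacker, defender in itertools.permutations(experts, 2):
        # The attacker poses a code instruction (the "challenge").
        instruction = attacker.generate("pose a challenging coding task")
        # Both contestants answer the same instruction.
        answers = {m.name: m.generate(instruction) for m in (attacker, defender)}
        # Only models that did not compete in this battle may judge it.
        panel = [j for j in judges if j.name not in answers]
        votes = [j.pick_winner(instruction, answers) for j in panel]
        winner = max(set(votes), key=votes.count)  # majority vote
        data.append({"instruction": instruction, "response": answers[winner]})
    return data


if __name__ == "__main__":
    experts = [Expert("model_a"), Expert("model_b"), Expert("model_c")]
    judges = [Judge("judge_x"), Judge("judge_y"), Judge("judge_z")]
    print(len(run_arena(experts, judges)))  # 6 ordered pairs -> 6 examples
```

Note that every training example is synthesized during the battles themselves rather than drawn from an existing dataset, which is what the abstract means by constructing data "from scratch".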