Large language models (LLMs) already excel at writing code in high-resource languages such as Python and JavaScript, yet stumble on low-resource languages that remain essential to science and engineering. Besides the obvious shortage of pre-training data, post-training itself is a bottleneck: every new language seems to require new datasets, test harnesses, and reinforcement-learning (RL) infrastructure. We introduce Agnostics, a language-agnostic post-training pipeline that eliminates this per-language engineering. The key idea is to judge code solely by its externally observable behavior, so a single verifier can test solutions written in any language. Concretely, we (i) use an LLM to rewrite existing unit-test datasets into an I/O format, (ii) supply a short configuration that tells the verifier how to compile and run a target language, and (iii) apply reinforcement learning with verifiable rewards (RLVR) in a robust code execution environment. Applied to five low-resource languages--Lua, Julia, R, OCaml, and Fortran--Agnostics (1) improves Qwen-3 4B to performance that rivals other 16B-70B open-weight models; (2) scales cleanly to larger and diverse model families (Qwen-3 8B, DeepSeek Coder 6.7B Instruct, Phi 4 Mini); and (3) for ${\le} 16$B parameter models, sets new state-of-the-art pass@1 results on MultiPL-E and a new multi-language version of LiveCodeBench that we introduce. We release the language-agnostic training datasets (Ag-MBPP-X, Ag-Codeforces-X, Ag-LiveCodeBench-X), training code, and ready-to-use configurations, making RL post-training in any programming language as simple as editing a short YAML file.
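The abstract's central mechanism can be made concrete with a small sketch: judge a candidate program only by its externally observable I/O behavior, so one verifier works for every language given a short "how to run it" configuration. This is an illustrative sketch only; the function and variable names (`check_io`, `reward`, `LUA_CMD`) and the config shape are hypothetical, not Agnostics' actual API.

```python
# Hypothetical sketch of a language-agnostic I/O verifier.
# Judge code solely by observable behavior: feed stdin, compare stdout.
import subprocess

def check_io(run_cmd, stdin_text, expected_stdout, timeout=10):
    """Run `run_cmd`, feed `stdin_text`, compare trimmed stdout."""
    try:
        proc = subprocess.run(
            run_cmd,
            input=stdin_text,
            capture_output=True,
            text=True,
            timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return False
    return proc.returncode == 0 and proc.stdout.strip() == expected_stdout.strip()

# The per-language configuration reduces to "how do I run a source file?"
# (illustrative commands; the real system would also cover compilation):
LUA_CMD = ["lua", "solution.lua"]
PYTHON_CMD = ["python3", "solution.py"]

# For RLVR, a verifiable reward is then simply all-tests-pass or not:
def reward(run_cmd, tests):
    return 1.0 if all(check_io(run_cmd, i, o) for i, o in tests) else 0.0
```

Because the verifier never inspects the source, swapping Fortran for Lua changes only the run command, not the test harness or the reward logic.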