Modern language models demonstrate impressive coding capabilities in common programming languages (PLs), such as C++ and Python, but their performance in lower-resource PLs is often limited by the availability of training data. In principle, however, most programming skills are universal across PLs, so capabilities acquired in one PL should transfer to others. In this work, we propose the task of zero-shot cross-programming-language transfer for code reinforcement learning (RL). We find that, for Llama-3.1, RL training for code generation in a source PL fails to improve, and sometimes even degrades, performance on other target PLs. To address this, we hypothesize that effective RL transfer requires a generalizable supervised fine-tuning (SFT) initialization before RL. We thus propose **Parallel-SFT**, an SFT strategy that incorporates "parallel programs" -- functionally equivalent code implemented in multiple PLs -- into the data mixture. We demonstrate that this improves transferability: when we subsequently perform RL on our Parallel-SFT model, we observe better generalization to unseen PLs. Analysis of the model's internal representations reveals that Parallel-SFT leads to a more functionality-centric latent space, where equivalent programs across PLs are more tightly clustered, which we hypothesize contributes to the improved transferability.