Code understanding and generation have fast become some of the most popular applications of language models (LMs). Nonetheless, research on multilingual aspects of Code-LMs (i.e., LMs for code generation), such as cross-lingual transfer between different programming languages, language-specific data augmentation, and post-hoc LM adaptation, alongside the exploitation of data sources beyond the original textual content, has been much sparser than for their natural language counterparts. In particular, most mainstream Code-LMs have been pre-trained on source code files alone. In this work, we investigate the prospect of leveraging readily available compiler intermediate representations (IR), shared across programming languages, to improve the multilingual capabilities of Code-LMs and facilitate cross-lingual transfer. To this end, we first compile SLTrans, a parallel dataset consisting of nearly 4M self-contained source code files paired with their respective intermediate representations. Next, starting from various base Code-LMs (ranging in size from 1.1B to 7.3B parameters), we carry out continued causal language modelling training on SLTrans, forcing the Code-LMs to (1) learn the IR language and (2) align IR constructs with the respective constructs of various programming languages. Our resulting models, dubbed IRCoder, display sizeable and consistent gains across a wide variety of code generation tasks and metrics, including prompt robustness, multilingual code completion, code understanding, and instruction following.