This paper presents NT-Java-1.1B, an open-source, specialized code language model built on StarCoderBase-1.1B and designed for coding tasks in Java. NT-Java-1.1B achieves state-of-the-art performance, surpassing its base model and the majority of similarly sized models on the MultiPL-E Java code benchmark. While there have been studies on extending large, generic pre-trained models to improve proficiency in specific programming languages such as Python, similar investigations into small code models for other programming languages are lacking. Large code models require specialized hardware such as GPUs for inference, highlighting the need for research into building small code models that can be deployed on developer desktops. This paper addresses that gap by developing a small Java code model, NT-Java-1.1B, and its quantized versions, which perform comparably to open models of around 1.1B parameters on the MultiPL-E Java code benchmark, making them ideal for desktop deployment. This paper establishes the foundation for specialized models across languages and sizes in the NT model family.