We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware. To that end, we pioneer the approach of training an intrinsically bilingual model with a 1:1 English-to-French pretraining data ratio, a custom tokenizer, and bilingual finetuning datasets. We release the training dataset, notably containing a French split with manually curated, high-quality, and varied data sources. To assess performance outside of English, we craft a novel benchmark, FrenchBench, consisting of an array of classification and generation tasks covering various orthogonal aspects of model performance in the French language. Additionally, rooted in transparency and to foster further Large Language Model research, we release codebases and dozens of checkpoints across various model sizes, training data distributions, and training steps, as well as fine-tuned Chat models and strong translation models. We evaluate our model through the FMTI framework and validate 81% of the transparency criteria, far beyond the scores of even most open initiatives. This work enriches the NLP landscape, breaking away from previous English-centric work in order to strengthen our understanding of multilinguality in language models.