We release Gaperon, a fully open suite of French-English-coding language models designed to advance transparency and reproducibility in large-scale model training. The Gaperon family includes 1.5B, 8B, and 24B parameter models trained on 2-4 trillion tokens, released with all elements of the training pipeline: French and English datasets filtered with a neural quality classifier, an efficient data curation and training framework, and hundreds of intermediate checkpoints. Through this work, we study how data filtering and contamination interact to shape both benchmark and generative performance. We find that filtering for linguistic quality enhances text fluency and coherence but yields subpar benchmark results, and that late deliberate contamination (continuing training on data mixes that include test sets) recovers competitive scores while only moderately degrading generation quality. We discuss how standard neural quality filtering can unintentionally amplify benchmark leakage. To support further research, we also introduce harmless data poisoning during pretraining, providing a realistic testbed for safety studies. By openly releasing all models, datasets, code, and checkpoints, Gaperon establishes a reproducible foundation for exploring the trade-offs between data curation, evaluation, safety, and openness in multilingual language model development.