We present SnakModel, a Danish large language model (LLM) based on Llama2-7B, which we continuously pre-train on 13.6B Danish words and further tune on 3.7M Danish instructions. As best practices for creating LLMs for smaller language communities have yet to be established, we examine the effects of early modeling and training decisions on downstream performance throughout the entire training pipeline, including (1) the creation of a strictly curated corpus of Danish text from diverse sources; (2) the language modeling and instruction-tuning process itself, including an analysis of intermediate training dynamics and ablations across different hyperparameters; and (3) an evaluation on eight language- and culture-specific tasks. Across these experiments, SnakModel achieves the highest overall performance, outperforming multiple contemporary Llama2-7B-based models. By making SnakModel, the majority of our pre-training corpus, and the associated code available under open licenses, we hope to foster further research and development in Danish Natural Language Processing and to establish training guidelines for languages with similar resource constraints.