Sailor2 is a family of cutting-edge multilingual language models for South-East Asian (SEA) languages, available in 1B, 8B, and 20B sizes to suit diverse applications. Building on Qwen2.5, Sailor2 undergoes continuous pre-training on 500B tokens (400B SEA-specific and 100B replay tokens) to support 13 SEA languages while retaining proficiency in Chinese and English. The Sailor2-20B model achieves a 50-50 win rate against GPT-4o across SEA languages. We also deliver a comprehensive cookbook on how to develop multilingual models efficiently, covering five key aspects: data curation, pre-training, post-training, model customization, and evaluation. We hope that the Sailor2 models (Apache 2.0 license) will drive language development in the SEA region, and that the Sailor2 cookbook will inspire researchers to build more inclusive LLMs for other under-served languages.