We introduce Xmodel-1.5, a 1-billion-parameter multilingual large language model pretrained on 2 trillion tokens, designed for balanced performance and scalability. Unlike most large models, which use a BPE tokenizer, Xmodel-1.5 employs a custom unigram tokenizer with a 65,280-token vocabulary, optimizing both efficiency and accuracy. The model delivers competitive results across multiple languages, including Thai, Arabic, French, Chinese, and English, outperforming Alibaba's PolyLM-1.7B on the respective evaluation datasets. Xmodel-1.5 excels on benchmarks such as multilingual MMLU (mMMLU) and PIQA, and achieves state-of-the-art results in Thai. To support low-resource language research, we release Xdata_Thai, a Thai-specific evaluation dataset featuring unique linguistic challenges such as gendered particles and idioms. While the model demonstrates strong performance, there is still room for improvement in handling culturally specific nuances. We hope this work contributes to advancements in multilingual AI research. Models and code are publicly available on GitHub: https://github.com/XiaoduoAILab/XmodelLM-1.5.
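As a minimal sketch of how a unigram tokenizer of this kind is typically used, the snippet below loads a SentencePiece unigram model and tokenizes multilingual text. The file name `tokenizer.model` is an assumption for illustration; the actual tokenizer artifact is distributed with the GitHub release linked above.

```python
# Minimal sketch: loading a SentencePiece unigram tokenizer and inspecting
# its behavior on multilingual inputs. The path "tokenizer.model" is a
# hypothetical placeholder, not a confirmed file name from the repository.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="tokenizer.model")
print(sp.get_piece_size())  # expected to be 65280 for Xmodel-1.5's vocabulary

for text in ["Hello, world!", "你好，世界", "สวัสดีครับ"]:
    pieces = sp.encode(text, out_type=str)  # human-readable subword pieces
    ids = sp.encode(text, out_type=int)     # integer token ids fed to the model
    print(f"{text!r} -> {pieces} -> {ids}")
```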