For many low-resource languages, the only available language models are large multilingual models trained on many languages simultaneously. However, using FLORES perplexity as a metric, we find that these models perform worse than bigram models for many languages (e.g., 24% of languages in XGLM 4.5B; 43% in BLOOM 7.1B). To facilitate research focused on low-resource languages, we pre-train and release Goldfish, a suite of monolingual autoregressive Transformer language models up to 125M parameters for 350 languages. The Goldfish reach lower FLORES perplexities than BLOOM, XGLM, and MaLA-500 on 98 of 204 FLORES languages, despite each Goldfish model being over 10x smaller. However, the Goldfish significantly underperform larger multilingual models on reasoning benchmarks, suggesting that for low-resource languages, multilinguality primarily improves general reasoning abilities rather than basic text generation. We release models trained on 5MB (350 languages), 10MB (288 languages), 100MB (166 languages), and 1GB (83 languages) of text data where available. The Goldfish models can serve as baselines, fine-tuning sources, or augmentations to existing models in low-resource NLP research, and they are further useful for crosslinguistic studies requiring maximally comparable models across languages.
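To make the perplexity comparison concrete, below is a minimal sketch of computing token-level perplexity with a causal language model via the Hugging Face transformers API. The checkpoint name goldfish-models/eng_latn_10mb and the sample sentence are illustrative assumptions (substitute an actual released checkpoint and held-out FLORES text), and the exact perplexity normalization used in the evaluation above may differ from this standard token-level computation.

```python
# Minimal sketch: token-level perplexity of a causal LM on a sample sentence.
# The model ID below is an assumed name for a Goldfish checkpoint; replace it
# with a real repository name from the release.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "goldfish-models/eng_latn_10mb"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

text = "The quick brown fox jumps over the lazy dog."  # stand-in for a FLORES sentence
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing labels=input_ids makes the model return the mean token-level
    # cross-entropy (in nats) over the shifted sequence.
    loss = model(**inputs, labels=inputs["input_ids"]).loss

perplexity = torch.exp(loss).item()  # exponentiated mean cross-entropy
print(f"Perplexity: {perplexity:.2f}")
```

Running the same computation with a multilingual checkpoint (e.g., BLOOM or XGLM) on identical held-out text gives the kind of per-language comparison described above; lower perplexity indicates a better fit to the evaluation text.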