Large language models (LLMs) have significantly advanced natural language processing, but their progress has not been equal across languages. While most LLMs are trained on high-resource languages like English, multilingual models generally underperform their monolingual counterparts. Additionally, aspects of these models' multilingual foundations, such as their computational demands and licensing regimes, sometimes restrict the byproducts they produce. In this study, we document the development of open-foundation models tailored for use in low-resource settings, along with their limitations and benefits. The result is the TeenyTinyLlama pair: two compact models for Brazilian Portuguese text generation. We release them under the permissive Apache 2.0 license on GitHub and Hugging Face for community use and further development. See https://github.com/Nkluge-correa/TeenyTinyLlama
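Since the checkpoints are released on Hugging Face, loading them follows the standard `transformers` workflow. Below is a minimal sketch of how one of the models might be loaded and prompted; the repository ID `nicholasKluge/TeenyTinyLlama-160m` and the generation settings are assumptions for illustration, not confirmed by this abstract, so check the project's GitHub page for the actual hub names.

```python
# Minimal sketch: loading a TeenyTinyLlama checkpoint from the Hugging Face
# hub with the `transformers` library and generating Brazilian Portuguese text.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical hub ID, assumed for illustration; verify against the repo.
model_id = "nicholasKluge/TeenyTinyLlama-160m"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tokenize a short Portuguese prompt and sample a continuation.
prompt = "A capital do Brasil é"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```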