Recent advances in language modeling have demonstrated significant improvements in zero-shot capabilities, including in-context learning, instruction following, and machine translation for extremely under-resourced languages (Tanzer et al., 2024). However, for many languages with limited written resources, the primary available documentation consists of formal descriptions of grammar and vocabulary. In this paper, we introduce a set of benchmarks to evaluate how well models can extract and classify information from the complex descriptions found in linguistic grammars, and we present a Retrieval-Augmented Generation (RAG)-based approach that leverages these descriptions for downstream tasks such as machine translation. Our benchmarks cover linguistic descriptions of 248 languages across 142 language families, focusing on typological features from WALS and Grambank. This set of benchmarks offers the first comprehensive evaluation of language models' in-context ability to accurately interpret and extract linguistic features, providing a critical resource for scaling NLP to low-resource languages. The code and data are publicly available at \url{https://github.com/al-the-eigenvalue/RAG-on-grammars}.