Low-resource languages pose persistent challenges for natural language processing tasks such as lemmatization and part-of-speech (POS) tagging. This paper investigates the capacity of recent large language models (LLMs), including GPT-4 variants and open-weight Mistral models, to address these tasks in few-shot and zero-shot settings for four historically and linguistically diverse under-resourced languages: Ancient Greek, Classical Armenian, Old Georgian, and Syriac. Using a novel benchmark comprising aligned training and out-of-domain test corpora, we evaluate the performance of foundation models on lemmatization and POS tagging, and compare them with PIE, a task-specific RNN baseline. Our results demonstrate that LLMs, even without fine-tuning, achieve competitive or superior performance on POS tagging and lemmatization for most languages in few-shot settings. Significant challenges persist for languages characterized by complex morphology and non-Latin scripts, but we show that LLMs are a credible option for initiating linguistic annotation in the absence of training data, serving as an effective aid for annotators.