Large language models (LLMs) are known to perform tasks effectively by simply observing a few exemplars. However, in low-resource languages, obtaining such hand-picked exemplars can still be challenging, and unsupervised techniques may be necessary. Moreover, competent generative capabilities of LLMs are observed only in high-resource languages, while their performance on under-represented languages falls behind due to pre-training data imbalance. To elicit LLMs' abilities in low-resource languages without any supervised data, we propose assembling synthetic exemplars from a diverse set of high-resource languages to prompt the LLMs to translate from any language into English. These prompts are then used to create intra-lingual exemplars for performing tasks in the target languages. Our unsupervised prompting method performs on par with supervised few-shot learning in LLMs of different sizes for translation between English and 13 Indic and 21 African low-resource languages. We also show that fine-tuning a 7B model on data generated by our method helps it perform competitively with a 175B model. In non-English translation tasks, our method even outperforms supervised prompting by up to 3 chrF++ in many low-resource languages. When evaluated on zero-shot multilingual summarization, our method surpasses other English-pivoting baselines by up to 4 ROUGE-L and is also favored by GPT-4.
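To make the two-stage pipeline concrete, the sketch below illustrates one plausible reading of the approach: exemplars drawn from several high-resource languages prompt the model to translate any input into English, and the English outputs are then paired back with the original sentences to form intra-lingual exemplars for the target language. It is a minimal sketch, not the paper's implementation; all names here (`call_llm`, `HIGH_RESOURCE_EXEMPLARS`, the sample sentence pairs) are illustrative assumptions.

```python
# Illustrative sketch of the prompting pipeline described above.
# `call_llm` is a hypothetical stand-in for any text-completion endpoint.

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM completion call (assumption, not a real API)."""
    raise NotImplementedError

# Stage 1 ingredients: synthetic exemplars from diverse HIGH-RESOURCE
# languages. The pairs below are illustrative; the method assembles such
# exemplars automatically rather than relying on hand-picked data.
HIGH_RESOURCE_EXEMPLARS = [
    ("fr", "Le chat dort sur le canapé.", "The cat is sleeping on the sofa."),
    ("de", "Das Wetter ist heute schön.", "The weather is nice today."),
    ("es", "Me gusta leer por la noche.", "I like to read at night."),
]

def build_translation_prompt(source_text: str) -> str:
    """Prompt the LLM to translate *any* language into English,
    conditioned on exemplars from several high-resource languages."""
    lines = []
    for _, src, tgt in HIGH_RESOURCE_EXEMPLARS:
        lines.append(f"Text: {src}\nEnglish: {tgt}")
    lines.append(f"Text: {source_text}\nEnglish:")
    return "\n\n".join(lines)

def make_intralingual_exemplar(lr_sentence: str, task_instruction: str):
    """Stage 2: translate a low-resource sentence into English, solve the
    task in English, then pair the result with the original sentence to
    form an intra-lingual exemplar for the target language."""
    english = call_llm(build_translation_prompt(lr_sentence))
    answer = call_llm(f"{task_instruction}\n\n{english}")
    return (lr_sentence, answer)
```

In this reading, the resulting (sentence, answer) pairs can then serve as in-context exemplars written entirely in the target language, which is what allows the method to operate without any supervised data in that language.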