Despite the remarkable success of LLMs in English, there is a significant performance gap in non-English languages. To address this, we introduce a novel recipe for creating a multilingual synthetic instruction tuning dataset, sPhinX, built by selectively translating instruction-response pairs from English into 50 languages. We test the effectiveness of sPhinX by using it to fine-tune two state-of-the-art models, Mistral-7B and Phi-Small, and then evaluating them across a comprehensive suite of multilingual benchmarks covering reasoning, question answering, reading comprehension, and machine translation. Our results show that Mistral-7B and Phi-Small fine-tuned with sPhinX outperform their base variants by an average of 5%pt. We also devise a strategy for incorporating N-shot examples into each fine-tuning sample, which further boosts the performance of the two models by 9%pt and 4%pt respectively over vanilla fine-tuning. To demonstrate the efficacy of our data curation approach, we also compare against directly translating our original dataset into the target languages, and observe gains of 7%pt and 4%pt on the two models respectively. sPhinX outperforms other multilingual instruction tuning datasets in both efficiency and diversity, reducing dataset creation costs, and it maintains strong performance on standard English LLM benchmarks with minimal regression.
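The N-shot strategy above packs in-context example pairs into each fine-tuning sample. A minimal sketch of this idea, assuming a hypothetical list-of-dicts data layout and prompt template (the actual sPhinX format and selection logic may differ):

```python
import random

# Hypothetical instruction-response pairs; the real sPhinX data is
# selectively translated from English into 50 languages.
pairs = [
    {"instruction": "Translate 'hello' to French.", "response": "bonjour"},
    {"instruction": "What is 2 + 2?", "response": "4"},
    {"instruction": "Name the capital of Japan.", "response": "Tokyo"},
    {"instruction": "Translate 'thank you' to Spanish.", "response": "gracias"},
]

def build_nshot_sample(pairs, target, n, seed=0):
    """Prefix the target pair with n randomly drawn example pairs,
    so each fine-tuning sample carries its own in-context examples."""
    rng = random.Random(seed)
    shots = rng.sample([p for p in pairs if p is not target], n)
    blocks = [f"Instruction: {p['instruction']}\nResponse: {p['response']}"
              for p in shots]
    # The model is trained to complete the final (target) response only.
    prompt = "\n\n".join(
        blocks + [f"Instruction: {target['instruction']}\nResponse:"]
    )
    return {"prompt": prompt, "completion": " " + target["response"]}

sample = build_nshot_sample(pairs, pairs[0], n=2)
```

Each resulting sample contains N solved examples followed by the target instruction, so vanilla supervised fine-tuning sees the same structure the model later encounters in few-shot evaluation.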