It is challenging to generate high-quality instruction datasets for non-English languages due to tail phenomena, which limit performance on less frequently observed data. To mitigate this issue, we propose translating existing high-quality English instruction datasets as a solution, emphasizing the need for complete and instruction-aware translations to preserve the inherent attributes of these datasets. We claim that fine-tuning LLMs with datasets translated in this way can improve their performance in the target language. To this end, we introduce a new translation framework tailored for instruction datasets, named InstaTrans (INSTruction-Aware TRANSlation). Through extensive experiments, we demonstrate the superiority of InstaTrans over competing methods in terms of the completeness and instruction-awareness of the translation, highlighting its potential to broaden the accessibility of LLMs across diverse languages at a relatively low cost. Furthermore, we validate that fine-tuning LLMs with datasets translated by InstaTrans effectively improves their performance in the target language.