Large language models, particularly multilingual ones, are designed, claimed, and expected to cater to native speakers of varied languages. We hypothesise that current practices of fine-tuning and evaluating these models may not perfectly align with this objective, owing to a heavy reliance on translation, which can introduce translation artefacts and defects. It remains unknown whether the nature of the instruction data affects model output; conversely, it is questionable whether translated test sets can capture such nuances. Because translated data is often used in both stages, such imperfections could have been overlooked. This work investigates these issues using controlled native or translated data during the instruction-tuning and evaluation stages. Experiments on eight base models and eight different benchmarks show that native or generation benchmarks reveal a notable difference between native and translated instruction data, especially when model performance is high, whereas other types of test sets cannot. A comparison between round-trip and single-pass translations highlights the importance of knowledge drawn from language-native resources. Finally, we demonstrate that regularization helps bridge this gap on structured but not generative tasks.