Large language models, particularly multilingual ones, are designed, claimed, and expected to cater to native speakers of varied languages. We hypothesise that current practices of fine-tuning and evaluating these models may fall short of this intention owing to a heavy reliance on translation, which can introduce translation artefacts and defects. It remains unknown whether the nature of the instruction data affects model output, and it is equally questionable whether translated test sets can capture such nuances. Because translated data is often used in both stages, such imperfections may have gone overlooked. This work investigates these issues by using controlled native or translated data during the instruction-tuning and evaluation stages and observing the resulting model performance. Experiments on eight base models and eight different benchmarks reveal that native or generation benchmarks display a notable difference between native and translated instruction data, especially when model performance is high, whereas other types of test sets do not. Finally, we demonstrate that regularization is beneficial for bridging this gap on structured but not generative tasks.