Perfect machine translation (MT) would render cross-lingual transfer (XLT) by means of multilingual language models (mLMs) superfluous. Given, on the one hand, the large body of work on improving XLT with mLMs and, on the other hand, recent advances in massively multilingual MT, in this work we systematically evaluate existing, and propose new, translation-based XLT approaches for transfer to low-resource languages. We show that all translation-based approaches dramatically outperform zero-shot XLT with mLMs, with the combination of round-trip translation of the source-language training data and translation of the target-language test instances at inference generally being the most effective. We next show that further empirical gains can be obtained by adding reliable translations into other high-resource languages to the training data. Moreover, we propose an effective translation-based XLT strategy even for languages not supported by the MT system. Finally, we show that model selection for XLT based on target-language validation data obtained with MT outperforms model selection based on source-language data. We believe our findings warrant a broader inclusion of more robust translation-based baselines in XLT research.
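The two translation-based strategies highlighted above (round-trip translation of the training data, and translation of test instances at inference) can be sketched as a minimal pipeline. This is an illustrative sketch, not the paper's implementation: the `translate` function below is a hypothetical stand-in for any massively multilingual MT system, stubbed here with tagged strings so the data flow is runnable.

```python
def translate(text: str, src: str, tgt: str) -> str:
    """Hypothetical MT call; a real system would return an actual translation.
    Stubbed with direction tags so the pipeline below is runnable."""
    return f"[{src}->{tgt}] {text}"


def round_trip(text: str, src: str, pivot: str) -> str:
    """Round-trip translation: src -> pivot -> src.

    Exposes the fine-tuned model during training to MT artifacts
    ("translationese") resembling the noise it will see on
    MT-translated test inputs at inference time."""
    return translate(translate(text, src, pivot), pivot, src)


# Translate-train side: round-trip the source-language training data
# (English here, with Swahili as an example pivot/target language).
train = ["The film was great.", "Terrible service."]
train_round_tripped = [round_trip(x, src="en", pivot="sw") for x in train]

# Translate-test side: translate target-language test instances into
# the source language before feeding them to the fine-tuned model.
test_target = ["Filamu ilikuwa nzuri."]
test_translated = [translate(x, src="sw", tgt="en") for x in test_target]
```

The combination of both sides corresponds to the configuration the abstract reports as generally most effective; training on additional reliable translations into other high-resource languages would simply extend `train_round_tripped` with further `translate` outputs.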