Large language models (LLMs) have made significant strides in various natural language processing (NLP) tasks. Recent research shows that moderately-sized LLMs often outperform their larger counterparts after task-specific fine-tuning. In this work, we delve into the process of adapting LLMs to specialize in document-level machine translation (DocMT) for a specific language pair. First, we explore how prompt strategies affect downstream translation performance. We then conduct extensive experiments with two fine-tuning methods, three LLM backbones, and 18 translation tasks across nine language pairs. Our findings indicate that in some cases these specialized models even surpass GPT-4 in translation performance, while in others they still suffer significantly from the off-target translation issue, even when fine-tuned exclusively on bilingual parallel documents. Furthermore, we provide an in-depth analysis of these LLMs tailored for DocMT, exploring aspects such as translation errors, discourse phenomena, training strategies, the scaling law of parallel documents, additional evaluation on recent test sets, and zero-shot cross-lingual transfer. Our findings not only shed light on the strengths and limitations of LLM-based DocMT models but also provide a foundation for future research.