Automatic post-editing (APE) aims to refine machine translations by correcting residual errors. Although recent large language models (LLMs) demonstrate strong translation capabilities, their effectiveness for APE, especially under document-level context, remains insufficiently understood. We present a systematic comparison of proprietary and open-weight LLMs under a naive document-level prompting setup, analyzing APE quality, contextual behavior, robustness, and efficiency. Our results show that proprietary LLMs achieve near human-level APE quality even with simple one-shot prompting, regardless of whether document context is provided. While these models exhibit higher robustness to data-poisoning attacks than their open-weight counterparts, this robustness also reveals a limitation: they largely fail to exploit document-level context for contextual error correction. Furthermore, standard automatic metrics do not reliably reflect these qualitative differences, highlighting the continued necessity of human evaluation. Despite their strong performance, the substantial cost and latency overheads of proprietary LLMs render them impractical for real-world APE deployment. Overall, our findings elucidate both the promise and the current limitations of LLM-based document-aware APE, and point toward the need for more efficient long-context modeling approaches for translation refinement.