Recent advances in Large Reasoning Models (LRMs), particularly those leveraging Chain-of-Thought (CoT) reasoning, have opened new possibilities for Machine Translation (MT). This position paper argues that LRMs substantially transform both traditional neural MT and LLM-based MT paradigms by reframing translation as a dynamic reasoning task that requires contextual, cultural, and linguistic understanding and reasoning. We identify three foundational shifts: 1) contextual coherence, where LRMs resolve ambiguities and preserve discourse structure through explicit reasoning over cross-sentence and complex context, or even the absence of context; 2) cultural intentionality, enabling models to adapt outputs by inferring speaker intent, audience expectations, and socio-linguistic norms; 3) self-reflection, where LRMs can reflect on their own output at inference time to correct potential translation errors, especially in extremely noisy cases, showing greater robustness than simple X->Y mapping. We examine a range of translation scenarios, including stylized, document-level, and multimodal translation, presenting empirical examples that demonstrate the advantages of LRMs. We also identify several interesting phenomena of LRMs in MT, such as auto-pivot translation, along with critical challenges such as over-localisation and inference efficiency. In conclusion, we argue that LRMs redefine translation systems not merely as text converters but as multilingual cognitive agents capable of reasoning about meaning beyond the text. This paradigm shift invites us to think about translation problems beyond traditional translation scenarios, in a much broader context enabled by LRMs - and what we can achieve on top of it.
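The self-reflection shift described above can be pictured as a draft-critique-revise loop at inference time. Below is a minimal sketch of that loop, not taken from any specific system; `call_model` is a hypothetical stand-in for an LRM inference call, stubbed here with canned responses so the control flow is runnable.

```python
def call_model(prompt: str) -> str:
    """Hypothetical LRM call; stubbed with canned responses for illustration."""
    if "Critique" in prompt:
        return "The draft is too casual; restore the polite register."
    if "Revise" in prompt:
        return "Bonjour, pourriez-vous m'aider ?"
    return "Salut, aide-moi."  # initial (flawed) draft


def reflective_translate(source: str, tgt_lang: str, rounds: int = 1) -> str:
    """Translate, then self-reflect: critique the draft and revise it."""
    draft = call_model(f"Translate to {tgt_lang}: {source}")
    for _ in range(rounds):
        critique = call_model(
            f"Critique this {tgt_lang} translation of '{source}': {draft}"
        )
        draft = call_model(
            f"Revise the translation '{draft}' given this critique: {critique}"
        )
    return draft


print(reflective_translate("Hello, could you help me?", "French"))
```

With a real LRM behind `call_model`, each critique step is where cross-sentence context, register, and noisy-input repairs would surface, in contrast to a single-pass X->Y mapping.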