Reinforcement learning (RL) has shown strong promise for LLM-based machine translation, with recent methods such as GRPO demonstrating notable gains; nevertheless, translation-oriented RL still suffers from noisy learning signals caused by Monte Carlo return estimation and from a vast trajectory space that favors global exploration over fine-grained local optimization. We introduce \textbf{PEGRL}, a \textit{two-stage} RL framework that uses post-editing as an auxiliary task to stabilize training and guide overall optimization. At each iteration, translation outputs are sampled to construct post-editing inputs, so that return estimation in the post-editing stage can condition on the current translation behavior while jointly supporting both global exploration and fine-grained local optimization. A task-specific weighting scheme further balances the contributions of the translation and post-editing objectives, yielding a biased but more sample-efficient estimator. Experiments on English$\to$Finnish, English$\to$Turkish, and English$\leftrightarrow$Chinese show consistent gains over RL baselines; on English$\to$Turkish, COMET-KIWI performance is comparable to that of advanced LLM-based systems (DeepSeek-V3.2).
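The two-stage loop described above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the function names (`pegrl_step`, `sample_translation`, `sample_post_edit`), the fixed weights, and the GRPO-style group-normalized advantages are all assumptions made for exposition.

```python
import statistics

def group_advantages(rewards):
    """GRPO-style group-normalized advantages: mean-centered, std-scaled."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # avoid divide-by-zero
    return [(r - mu) / sigma for r in rewards]

def pegrl_step(source, sample_translation, sample_post_edit, reward,
               w_mt=0.7, w_pe=0.3, group_size=4):
    """One illustrative PEGRL iteration (hypothetical interface).

    Stage 1 samples a group of translations; stage 2 builds post-editing
    inputs from those samples, so the post-editing return estimate is
    conditioned on the current translation behavior. A task-specific
    weighting (w_mt, w_pe) then combines the two advantage signals into
    a biased but lower-variance estimator.
    """
    # Stage 1: sample a group of translation drafts and score them.
    drafts = [sample_translation(source) for _ in range(group_size)]
    mt_adv = group_advantages([reward(d) for d in drafts])

    # Stage 2: post-edit each sampled draft and score the edits.
    edits = [sample_post_edit(source, d) for d in drafts]
    pe_adv = group_advantages([reward(e) for e in edits])

    # Weighted combination of the per-sample advantage signals.
    return [w_mt * a + w_pe * b for a, b in zip(mt_adv, pe_adv)]
```

In practice the advantages would weight per-token policy-gradient losses for the translation and post-editing rollouts; the sketch only shows how the two stages share samples and how their signals are mixed.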