Maximum-a-posteriori (MAP) decoding is the most widely used decoding strategy for neural machine translation (NMT) models. The underlying assumption is that model probability correlates well with human judgment, with better translations being assigned a higher score by the model. However, research has shown that this assumption does not always hold, and generation quality can be improved by decoding to optimize a utility function backed by a metric or quality-estimation signal, as is done by Minimum Bayes Risk (MBR) or quality-aware decoding. The main disadvantage of these approaches is that they require an additional model to calculate the utility function during decoding, significantly increasing the computational cost. In this paper, we propose to make the NMT models themselves quality-aware by training them to estimate the quality of their own output. Using this approach for MBR decoding, we can drastically reduce the size of the candidate list, resulting in a speed-up of two orders of magnitude. When applying our method to MAP decoding, we obtain quality gains similar or even superior to those of quality-reranking approaches, but with the efficiency of single-pass decoding.
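To make the MBR decoding referenced above concrete, the following is a minimal sketch, not the paper's implementation: each candidate is scored by its average utility against the other candidates used as pseudo-references, and the highest-scoring candidate is returned. The `toy_utility` function is an illustrative stand-in for a real metric or quality-estimation model.

```python
def mbr_decode(candidates, utility):
    """Return the candidate with the highest expected utility,
    using the candidate list itself as pseudo-references."""
    best, best_score = None, float("-inf")
    for hyp in candidates:
        # Average utility of `hyp` against all other candidates.
        score = sum(utility(hyp, ref) for ref in candidates if ref is not hyp)
        score /= max(len(candidates) - 1, 1)
        if score > best_score:
            best, best_score = hyp, score
    return best


def toy_utility(hyp, ref):
    # Token-overlap F1: a hypothetical placeholder for a learned
    # metric or QE signal (assumption, not the paper's utility).
    h, r = set(hyp.split()), set(ref.split())
    if not h or not r:
        return 0.0
    p = len(h & r) / len(h)
    rec = len(h & r) / len(r)
    return 0.0 if p + rec == 0 else 2 * p * rec / (p + rec)
```

The cost of standard MBR is quadratic in the candidate-list size, since every candidate is compared against every other; this is why shrinking the candidate list, as the paper proposes, yields such large speed-ups.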