Attention is the core mechanism of today's most widely used architectures for natural language processing and has been analyzed from many perspectives, including its effectiveness for machine translation-related tasks. Among these studies, attention has proved to be a useful source of information about word alignment, even when the textual input is replaced with audio segments, as in the speech translation (ST) task. In this paper, we propose AlignAtt, a novel policy for simultaneous ST (SimulST) that exploits attention information to generate source-target alignments that guide the model during inference. Through experiments on the 8 language pairs of MuST-C v1.0, we show that AlignAtt outperforms previous state-of-the-art SimulST policies applied to offline-trained models, with gains of 2 BLEU points and latency reductions ranging from 0.5s to 0.8s across the 8 languages.
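To make the idea concrete, the decision rule of an attention-based SimulST policy of this kind can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `alignatt_emit`, the parameter `frame_threshold`, and the use of a raw argmax over averaged cross-attention weights are assumptions made for the example. The intuition is that a candidate target token whose attention peaks on the most recently received audio frames may still change once more audio arrives, so generation stops there until further input is read.

```python
import numpy as np

def alignatt_emit(attn_weights: np.ndarray, frame_threshold: int) -> int:
    """Count how many candidate target tokens are safe to emit.

    attn_weights: (num_tokens, num_frames) cross-attention matrix of
        candidate target tokens over the source audio frames seen so far
        (e.g. averaged across heads and layers -- an assumption here).
    frame_threshold: tokens whose attention peaks inside the last
        `frame_threshold` frames are withheld (hypothetical parameter).
    Returns the number of leading tokens that can be emitted now.
    """
    num_tokens, num_frames = attn_weights.shape
    emitted = 0
    for t in range(num_tokens):
        # Frame the token is most strongly aligned to.
        aligned_frame = int(np.argmax(attn_weights[t]))
        # If the token aligns with the most recent frames, it may still
        # be revised when more audio arrives: stop emitting here.
        if aligned_frame >= num_frames - frame_threshold:
            break
        emitted += 1
    return emitted

# Toy example: 2 candidate tokens, 3 frames received so far.
attn = np.array([
    [0.90, 0.05, 0.05],  # peaks on frame 0 -> safe to emit
    [0.10, 0.10, 0.80],  # peaks on the last frame -> withheld
])
print(alignatt_emit(attn, frame_threshold=1))  # -> 1
```

In a streaming loop, this check would run after each new audio chunk: emit the safe prefix, read more audio, re-decode, and repeat until the source ends.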