In document-level neural machine translation (DocNMT), multi-encoder approaches are commonly used to encode the context and source sentences. Recent studies \cite{li-etal-2020-multi-encoder} have shown that the context encoder acts as a noise generator, making the model robust to the choice of context. This paper investigates this observation further by explicitly modelling context encoding through multi-task learning (MTL), aiming to make the model sensitive to the choice of context. We conduct experiments on a cascade MTL architecture consisting of one encoder and two decoders: generating the source from the context is the auxiliary task, and generating the target from the source is the main task. We experiment on the German--English language pair using the News, TED, and Europarl corpora. Evaluation results show that the proposed MTL approach outperforms concatenation-based and multi-encoder DocNMT models in low-resource settings and is sensitive to the choice of context. However, we observe that the MTL models fail to generate the source from the context. These observations align with previous studies and may suggest that the available document-level parallel corpora are not context-aware, and that a robust sentence-level model can outperform context-aware models.