Translating major-language resources to build minor-language resources has become a widely used approach. In particular, when translating complex data points composed of multiple components, it is common to translate each component separately. However, we argue that this practice overlooks the interrelations between components within the same data point. To address this limitation, we propose a novel MT pipeline that accounts for intra-data relations when applying MT to training data. In our MT pipeline, all components of a data point are concatenated into a single translation sequence and then reconstructed into the original components after translation. We introduce a Catalyst Statement (CS) to strengthen the intra-data relation, and an Indicator Token (IT) to assist the decomposition of a translated sequence into its respective data components. Our approach yields a considerable improvement both in translation quality itself and in effectiveness as training data. Compared with the conventional approach that translates each data component separately, our method produces better training data, improving the performance of the trained model by 2.690 points on the web page ranking (WPR) task and 0.845 points on the question generation (QG) task in the XGLUE benchmark.
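The concatenate-translate-decompose flow described above can be sketched in a few lines. This is a minimal illustration under stated assumptions: the Catalyst Statement text, the Indicator Token format, and the `translate()` stub are hypothetical placeholders, not the paper's exact choices, and it assumes the MT system copies the indicator tokens through unchanged.

```python
import re

# Hypothetical Catalyst Statement and Indicator Token template
# (illustrative only; the paper's actual CS/IT forms may differ).
CATALYST = "The following texts are parts of one related data point:"
IT = "[SEP{}]"


def compose(components):
    """Join all components of one data point into a single source sequence,
    prefixed by the Catalyst Statement and delimited by Indicator Tokens."""
    parts = [CATALYST]
    for i, text in enumerate(components):
        parts.append(f"{IT.format(i)} {text}")
    return " ".join(parts)


def decompose(translated, n_components):
    """Split the translated sequence back into its components at the ITs,
    dropping the Catalyst Statement prefix."""
    pattern = "|".join(re.escape(IT.format(i)) for i in range(n_components))
    pieces = re.split(f"(?:{pattern})", translated)
    return [p.strip() for p in pieces[1:]]


def translate(seq):
    """Placeholder for a real MT system call; identity here for illustration."""
    return seq


components = ["What is the capital of France?", "Paris is the capital of France."]
sequence = compose(components)
recovered = decompose(translate(sequence), len(components))
assert recovered == components
```

With an identity `translate()`, decomposition exactly recovers the original components; with a real MT system, the recovery depends on the indicator tokens surviving translation, which is precisely what the IT design is meant to ensure.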