Word-level AutoCompletion (WLAC) is a rewarding yet challenging task in Computer-aided Translation. Existing work addresses this task with a neural classification model that maps the hidden vector of the input context to its corresponding label (i.e., the candidate target word is treated as a label). Since the context hidden vector itself does not take the label into account and is projected to the label through a linear classifier, the model cannot sufficiently leverage valuable information from the source sentence, as verified in our experiments, which eventually hinders its overall performance. To alleviate this issue, this work proposes an energy-based model for WLAC, which enables the context hidden vector to capture crucial information from the source sentence. Unfortunately, training and inference suffer from efficiency and effectiveness challenges, so we employ three simple yet effective strategies to put our model into practice. Experiments on four standard benchmarks demonstrate that our reranking-based approach achieves substantial improvements (about 6.07%) over the previous state-of-the-art model. Further analyses show that each strategy of our approach contributes to the final performance.
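The contrast the abstract draws can be illustrated with a toy sketch: a baseline linear classifier scores labels from the context vector alone, while an energy function jointly scores the context, the source-sentence representation, and each candidate word, and is used to rerank the classifier's top-k candidates. Everything below (the dimensions, the dot-product energy, the pooled source vector) is a hypothetical illustration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = 10  # toy vocabulary size (hypothetical)
HID = 8     # toy hidden dimension (hypothetical)

# Baseline WLAC head: a linear classifier over the context hidden
# vector h. Note that the label plays no role in computing h.
W = rng.normal(size=(VOCAB, HID))
label_emb = rng.normal(size=(VOCAB, HID))  # candidate-word embeddings
src_vec = rng.normal(size=HID)             # pooled source-sentence vector

def classifier_scores(h):
    """Project the context vector onto every label independently."""
    return W @ h

def energy(h, y):
    """Toy energy (lower is better): couples candidate word y with
    both the context vector h and the source representation."""
    return -float((h + label_emb[y]) @ src_vec)

def rerank(h, k=5):
    """Take the classifier's top-k candidates, rescore them with the
    energy function, and return them in ascending-energy order."""
    topk = np.argsort(classifier_scores(h))[::-1][:k]
    return sorted(topk, key=lambda y: energy(h, y))

h = rng.normal(size=HID)
best = rerank(h)[0]  # the reranked top candidate word index
```

Restricting the energy model to the classifier's top-k candidates is one plausible way to sidestep the inference-efficiency problem the abstract mentions, since evaluating the energy over the full vocabulary would be costly.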