Error Span Detection (ESD) is a crucial subtask in Machine Translation (MT) evaluation, aiming to identify the location and severity of translation errors. While fine-tuning models on human-annotated data improves ESD performance, acquiring such data is expensive and prone to inconsistencies among annotators. To address this, we propose a novel self-evolution framework based on Minimum Bayes Risk (MBR) decoding, named Iterative MBR Distillation for ESD, which eliminates the reliance on human annotations by leveraging an off-the-shelf LLM to generate pseudo-labels. Extensive experiments on the WMT Metrics Shared Task datasets demonstrate that models trained solely on these self-generated pseudo-labels outperform both the unadapted base model and supervised baselines trained on human annotations at the system and span levels, while maintaining competitive sentence-level performance.
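The MBR selection at the heart of such a framework can be sketched in a few lines: among several candidate pseudo-labels sampled from the LLM, pick the one with the highest expected utility against the other candidates, treating them as references. This is a minimal illustration, not the paper's implementation; the toy Jaccard utility stands in for whatever span-level agreement metric is actually used.

```python
def jaccard_utility(hyp: str, ref: str) -> float:
    """Toy utility: token-level Jaccard similarity between two annotations."""
    h, r = set(hyp.split()), set(ref.split())
    return len(h & r) / len(h | r) if h | r else 1.0

def mbr_select(candidates: list[str], utility=jaccard_utility) -> str:
    """Return the candidate with the highest average utility against
    all other candidates (the MBR / consensus choice)."""
    best, best_score = candidates[0], float("-inf")
    for hyp in candidates:
        others = [ref for ref in candidates if ref is not hyp]
        score = sum(utility(hyp, ref) for ref in others) / len(others)
        if score > best_score:
            best, best_score = hyp, score
    return best

# The candidate most consistent with the rest is selected as the pseudo-label.
print(mbr_select(["a b c", "a b", "a"]))  # → "a b"
```

In the iterative-distillation setting, the selected candidate would serve as the training target for the next round, so that the student gradually converges toward the consensus annotations.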