Current evaluation of German automatic text simplification (ATS) relies on general-purpose metrics such as SARI, BLEU, and BERTScore, which insufficiently capture simplification quality in terms of simplicity, meaning preservation, and fluency. While specialized metrics like LENS have been developed for English, corresponding efforts for German have lagged behind due to the absence of human-annotated corpora. To close this gap, we introduce DETECT, the first German-specific metric that holistically evaluates ATS quality across all three dimensions of simplicity, meaning preservation, and fluency, and that is trained entirely on synthetic large language model (LLM) responses. Our approach adapts the LENS framework to German and extends it with (i) a pipeline for generating synthetic quality scores via LLMs, enabling dataset creation without human annotation, and (ii) an LLM-based refinement step that aligns grading criteria with simplification requirements. We also construct what is, to the best of our knowledge, the largest German human evaluation dataset for text simplification, which we use to validate our metric directly. Experimental results show that DETECT achieves substantially higher correlations with human judgments than widely used ATS metrics, with particularly strong gains in meaning preservation and fluency. Beyond ATS, our findings highlight both the potential and the limitations of LLMs for automatic evaluation and provide transferable guidelines for general language accessibility tasks.