While there has been significant development of models for Plain Language Summarization (PLS), evaluation remains a challenge. PLS lacks a dedicated assessment metric, and the suitability of text generation evaluation metrics is unclear due to the unique transformations involved (e.g., adding background explanations, removing jargon). To address this gap, our study introduces a granular meta-evaluation testbed, APPLS, designed to evaluate metrics for PLS. We identify four PLS criteria from previous work -- informativeness, simplification, coherence, and faithfulness -- and define a set of perturbations corresponding to these criteria that sensitive metrics should be able to detect. We apply these perturbations to extractive hypotheses for two PLS datasets to form our testbed. Using APPLS, we assess the performance of 14 metrics, including automated scores, lexical features, and LLM prompt-based evaluations. Our analysis reveals that while some current metrics show sensitivity to specific criteria, no single method captures all four criteria simultaneously. We therefore recommend that a suite of automated metrics be used to capture PLS quality along all relevant criteria. This work contributes the first meta-evaluation testbed for PLS and a comprehensive evaluation of existing metrics. APPLS and our evaluation code are available at https://github.com/LinguisticAnomalies/APPLS.
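To make the perturbation-based sensitivity idea concrete, the following is a minimal sketch, not the authors' code: it applies a toy jargon-injection perturbation to a hypothesis and checks whether a candidate metric's score moves in response. The example text, the perturbation, and the choice of ROUGE-L (via the rouge-score package) are illustrative assumptions; the actual APPLS perturbations and metrics are defined in the released repository.

```python
# Illustrative sensitivity check: a metric suited to PLS should react
# (here, drop) when jargon is injected into an otherwise plain hypothesis.
# NOTE: example texts and the perturbation are hypothetical; ROUGE-L stands
# in for any of the 14 metrics evaluated in the paper.
from rouge_score import rouge_scorer

def rouge_l_f(metric, reference: str, hypothesis: str) -> float:
    # rouge_scorer.RougeScorer.score(target, prediction) returns Score tuples
    return metric.score(reference, hypothesis)["rougeL"].fmeasure

reference = "The drug lowered blood pressure in most participants."
original = "The medication reduced blood pressure for most people in the study."
perturbed = original + " Myocardial infarction risk was attenuated via RAAS blockade."

metric = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
delta = rouge_l_f(metric, reference, original) - rouge_l_f(metric, reference, perturbed)
print(f"Score change under jargon perturbation: {delta:.3f}")
```

In the testbed, this comparison is carried out systematically: each perturbation type targets one of the four criteria, and a metric is considered sensitive to that criterion if its scores reliably distinguish original from perturbed hypotheses.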