Traditional metrics like BLEU and BERTScore fail to capture semantic fidelity in generative text-to-text tasks. We adapt the Cross-Examination Framework (CEF) for reference-free, multi-dimensional evaluation by treating the source and candidate texts as independent knowledge bases. CEF generates verifiable questions from each text and performs a cross-examination to derive three interpretable scores: Coverage, Conformity, and Consistency. Validated across translation, summarization, and clinical note generation, our framework identifies critical errors missed by standard metrics, such as content omissions and factual contradictions. A key contribution is a systematic robustness analysis for selecting a stable judge model. Crucially, the strong correlation between our reference-free and with-reference modes validates CEF's reliability without gold references. Furthermore, human expert validation demonstrates that CEF's mismatching questions align with meaning-altering semantic errors significantly more often than with non-semantic errors, and that the framework particularly excels at identifying entity-based and relational distortions.
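The cross-examination idea above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: `gen_questions` and `answer` are hypothetical stubs standing in for the LLM judge, so only the scoring logic of Coverage, Conformity, and Consistency is shown.

```python
# Toy sketch of CEF-style cross-examination scoring (illustrative only).
# gen_questions and answer are hypothetical stubs standing in for LLM-judge
# calls; real CEF would use a model for question generation and QA.

def gen_questions(text):
    # Stub: one "verifiable question" per sentence-like chunk.
    return [f"What does this state: '{s.strip()}'?"
            for s in text.split(".") if s.strip()]

def answer(text, question):
    # Stub QA: the question is answerable iff its quoted chunk appears in text.
    chunk = question.split("'")[1]
    return chunk if chunk in text else None

def cef_scores(source, candidate):
    src_qs = gen_questions(source)
    cand_qs = gen_questions(candidate)
    # Coverage: share of source-derived questions the candidate can answer
    # (low coverage flags content omissions).
    coverage = sum(answer(candidate, q) is not None for q in src_qs) / max(len(src_qs), 1)
    # Conformity: share of candidate-derived questions grounded in the source
    # (low conformity flags hallucinated content).
    conformity = sum(answer(source, q) is not None for q in cand_qs) / max(len(cand_qs), 1)
    # Consistency: answer agreement on questions both texts can answer
    # (disagreement flags factual contradictions).
    shared = [q for q in src_qs if answer(candidate, q) is not None]
    consistency = (sum(answer(source, q) == answer(candidate, q) for q in shared)
                   / max(len(shared), 1))
    return coverage, conformity, consistency
```

For example, a candidate that drops a source sentence scores below 1.0 on Coverage while keeping Conformity and Consistency at 1.0, which is exactly the omission signature BLEU-style surface metrics miss.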