In this thesis, I refine our understanding of what conclusions can be drawn from coreference-based evaluations by expanding existing evaluation practices and examining the extent to which evaluation results converge or conflict. First, I analyze standard coreference evaluations and show that their design often leads to non-generalizable conclusions due to issues of measurement validity, including contestedness (multiple, competing definitions of coreference) and convergent validity (evaluation results that rank models differently across benchmarks). Second, I propose and implement a novel evaluation that tests systems' ability to infer the relative plausibility of events, a key aspect of resolving coreference. Through this extended evaluation, I find that contemporary language models perform strongly on standard benchmarks, improving over earlier baseline systems within certain domains and types of coreference, but remain sensitive to evaluation conditions: when evaluation contexts are slightly modified, they often fail to generalize in ways one would expect of a human. Taken together, these findings clarify both the strengths of the current NLP paradigm, such as improved accuracy over baselines on widely used evaluations, and its limitations, including weaknesses in measurement validity, and they suggest directions for future work on better evaluation methods and more genuinely generalizable systems.