Recent evaluations of LLMs on coreference resolution have revealed that traditional output formats and evaluation metrics do not fully capture the models' referential understanding. To address this, we introduce IdentifyMe, a new benchmark for mention resolution presented in a multiple-choice question (MCQ) format, commonly used for evaluating LLMs. IdentifyMe features long narratives and employs heuristics to exclude easily identifiable mentions, creating a more challenging task. The benchmark also consists of a curated mixture of different mention types and corresponding entities, allowing for a fine-grained analysis of model performance. We evaluate both closed- and open-source LLMs on IdentifyMe and observe a significant performance gap (20-30%) between state-of-the-art sub-10B open models and closed ones. We observe that pronominal mentions, which carry limited surface information, are typically much harder for models to resolve than nominal mentions. Additionally, we find that LLMs often confuse entities when their mentions overlap in nested structures. The highest-scoring model, GPT-4o, achieves 81.9% accuracy, highlighting the strong referential capabilities of state-of-the-art LLMs while also indicating room for further improvement.
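As a minimal sketch of the MCQ-style evaluation setup described above, the following shows how a mention-resolution question might be formatted as a multiple-choice prompt and how accuracy could be scored. The prompt template, field names, and toy data are illustrative assumptions, not the actual IdentifyMe format.

```python
# Hypothetical sketch of MCQ-style mention resolution scoring.
# The prompt template and data below are illustrative assumptions,
# not the actual IdentifyMe benchmark format.

def build_mcq_prompt(narrative: str, mention: str, options: list[str]) -> str:
    """Format one mention-resolution question as a multiple-choice prompt."""
    letters = "ABCDEFGH"
    option_lines = [f"({letters[i]}) {opt}" for i, opt in enumerate(options)]
    return (
        f"Narrative:\n{narrative}\n\n"
        f'Which entity does the mention "{mention}" refer to?\n'
        + "\n".join(option_lines)
        + "\nAnswer with a single letter."
    )

def accuracy(predictions: list[str], gold: list[str]) -> float:
    """Fraction of questions where the predicted letter matches the gold letter."""
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Toy usage: a pronominal mention with three candidate entities.
prompt = build_mcq_prompt(
    "Maya handed the letter to her brother. He read it slowly.",
    "He",
    ["Maya", "her brother", "the letter"],
)
print(prompt)
print(accuracy(["B", "A", "B"], ["B", "B", "B"]))  # 2 of 3 correct
```

Scoring by exact letter match is one simple choice; the paper's point is that this MCQ framing sidesteps the cluster-based output formats and metrics (e.g., span alignment) that traditional coreference evaluation requires.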