We evaluate recent Large Language Models (LLMs) on the challenging task of summarizing short stories, which can be lengthy and can include nuanced subtext or scrambled timelines. Importantly, we work directly with authors to ensure that the stories have not been shared online (and are therefore unseen by the models) and to obtain informed evaluations of summary quality from the authors themselves. Through quantitative and qualitative analysis grounded in narrative theory, we compare GPT-4, Claude-2.1, and Llama-2-70B. We find that all three models make faithfulness mistakes in over 50% of summaries and struggle with specificity and with interpreting difficult subtext. We additionally show that LLM ratings and other automatic metrics for summary quality correlate poorly with the writers' own quality ratings.