We evaluate recent Large Language Models (LLMs) on the challenging task of summarizing short stories, which can be lengthy and include nuanced subtext or scrambled timelines. Importantly, we work directly with authors to ensure that the stories have not been shared online (and are therefore unseen by the models) and to obtain informed evaluations of summary quality from the authors themselves. Through quantitative and qualitative analysis grounded in narrative theory, we compare GPT-4, Claude-2.1, and Llama-2-70B. We find that all three models make faithfulness mistakes in over 50% of summaries and struggle with specificity and with interpreting difficult subtext. We additionally demonstrate that LLM ratings and other automatic metrics for summary quality do not correlate well with the writers' own quality ratings.