Discharge summaries in Electronic Health Records (EHRs) are crucial for clinical decision-making, but their length and complexity make information extraction challenging, especially when dealing with accumulated summaries across multiple patient admissions. Large Language Models (LLMs) show promise in addressing this challenge by efficiently analyzing vast and complex data. Existing benchmarks, however, fall short in properly evaluating LLMs' capabilities in this context, as they typically focus on single-note information or limited topics, failing to reflect the real-world inquiries required by clinicians. To bridge this gap, we introduce EHRNoteQA, a novel benchmark built on the MIMIC-IV EHR, comprising 962 different QA pairs, each linked to a distinct patient's discharge summaries. Every QA pair is initially generated using GPT-4 and then manually reviewed and refined by three clinicians to ensure clinical relevance. EHRNoteQA includes questions that require information across multiple discharge summaries and covers eight diverse topics, mirroring the complexity and diversity of real clinical inquiries. We offer EHRNoteQA in two formats, open-ended and multi-choice question answering, and propose a reliable evaluation method for each. We evaluate 27 LLMs using EHRNoteQA and examine various factors affecting model performance (e.g., the length and number of discharge summaries). Furthermore, to validate EHRNoteQA as a reliable proxy for expert evaluations in clinical practice, we measure the correlation between LLM performance on EHRNoteQA and LLM performance manually evaluated by clinicians. Results show that LLM performance on EHRNoteQA has a higher correlation with clinician-evaluated performance (Spearman: 0.78, Kendall: 0.62) compared to other benchmarks, demonstrating its practical relevance in evaluating LLMs in clinical settings.