Large language models (LLMs) have shown promise in safety-critical applications such as healthcare, yet the ability to quantify their performance has lagged. One example of this challenge is evaluating a summary of a patient's medical record. A good summary lets the provider quickly get a high-level overview of the patient's health status; a summary that omits important facts from the record, however, can paint a misleading picture and negatively affect medical decision-making. We propose MED-OMIT, a metric designed to address this challenge. As a case study, we use provider-patient history conversations to generate a subjective (a summary of the patient's history). We begin by discretizing facts from the dialogue and identifying which are omitted from the subjective. To determine which facts are clinically relevant, we measure the importance of each fact to a simulated differential diagnosis. We compare MED-OMIT's judgments to those of clinical experts and find broad agreement. We then use MED-OMIT to evaluate LLM performance on subjective generation and find that some LLMs (e.g., gpt-4 and llama-3.1-405b) work well with little effort, while others (e.g., Llama 2) perform worse.
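The pipeline described above — discretize facts from the dialogue, find those omitted from the subjective, and weight each omission by its relevance to a simulated differential diagnosis — can be sketched as follows. This is a minimal illustration over toy data: in the actual system each step (fact extraction, diagnosis simulation) is performed by LLM prompting, and all function names and the scoring scheme here are assumptions for illustration.

```python
def omitted_facts(dialogue_facts, summary_facts):
    """Facts stated in the dialogue but missing from the summary."""
    return set(dialogue_facts) - set(summary_facts)

def importance(fact, supported_diagnoses):
    """Score a fact by how many candidate diagnoses it bears on
    (a stand-in for the simulated differential-diagnosis step)."""
    return len(supported_diagnoses.get(fact, []))

def med_omit_score(dialogue_facts, summary_facts, supported_diagnoses):
    """Total importance of all omitted facts; lower is better."""
    return sum(importance(f, supported_diagnoses)
               for f in omitted_facts(dialogue_facts, summary_facts))

# Toy example: a clinically relevant omission ("smoker") is penalized,
# while an irrelevant one ("likes hiking") is not.
dialogue = {"chest pain", "smoker", "likes hiking"}
summary = {"chest pain"}
relevance = {"chest pain": ["CAD"], "smoker": ["CAD", "COPD"], "likes hiking": []}
score = med_omit_score(dialogue, summary, relevance)  # 2
```

The key design point this sketch captures is that omissions are not counted uniformly: an omitted fact only matters to the extent it would change the simulated differential diagnosis.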