Current benchmarks for evaluating large language models (LLMs) in medicine primarily focus on question answering involving domain knowledge and descriptive reasoning, rather than on computation and logic-based reasoning. While such qualitative capabilities are vital to medical diagnosis, in real-world scenarios doctors frequently use clinical calculators, which follow quantitative equations and rule-based reasoning paradigms, for evidence-based decision support. To this end, we propose MedCalc-Bench, a first-of-its-kind dataset for evaluating the medical calculation capability of LLMs. MedCalc-Bench contains an evaluation set of over 1,000 manually reviewed instances spanning 55 different medical calculation tasks. Each instance consists of a patient note, a question requesting the computation of a specific medical value, a ground-truth answer, and a step-by-step explanation of how the answer is obtained. While our evaluation results show the potential of LLMs in this area, none of the models is effective enough for clinical settings. Common failures include extracting incorrect entities, applying the wrong equation or rules for a calculation task, and performing the arithmetic incorrectly. We hope our study highlights the quantitative knowledge and reasoning gaps of LLMs in medical settings and encourages future improvements of LLMs for various clinical calculation tasks.
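To illustrate the kind of equation-based clinical calculator the benchmark targets, here is a minimal sketch of the widely used Cockcroft-Gault creatinine clearance formula. The function name and parameter names are our own illustrative choices, and this is one common clinical calculation rather than a confirmed task from the 55 in MedCalc-Bench.

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, sex):
    """Estimate creatinine clearance (mL/min) via the Cockcroft-Gault equation.

    Illustrative sketch only; not taken from the MedCalc-Bench task set.
    """
    # Core equation: (140 - age) * weight / (72 * serum creatinine)
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    if sex == "female":
        crcl *= 0.85  # standard adjustment factor for female patients
    return crcl

# Example: a 60-year-old, 70 kg male with serum creatinine 1.0 mg/dL
print(round(cockcroft_gault(60, 70, 1.0, "male"), 1))  # → 77.8
```

Solving such a task from a patient note requires exactly the steps the abstract lists: extracting the correct entities (age, weight, creatinine, sex), selecting the right equation, and performing the arithmetic correctly.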