Large language models (LLMs) are increasingly applied to clinical decision-making. However, their potential to exhibit bias poses significant risks to clinical equity. Currently, benchmarks that systematically evaluate such clinical bias in LLMs are lacking. While some biases can be avoided in downstream tasks, for example by instructing the model to answer "I'm not sure...", the bias hidden inside the model itself remains understudied. We introduce CLIMB (short for A Benchmark of Clinical Bias in Large Language Models), a pioneering comprehensive benchmark that evaluates both intrinsic bias (within the LLM) and extrinsic bias (on downstream tasks) for clinical decision tasks. Notably, for intrinsic bias, we introduce a novel metric, AssocMAD, to assess the disparities of LLMs across multiple demographic groups. Additionally, we leverage counterfactual intervention to evaluate extrinsic bias in the task of clinical diagnosis prediction. Our experiments on popular and medically adapted LLMs, particularly from the Mistral and LLaMA families, reveal prevalent intrinsic and extrinsic bias. This work underscores the critical need to mitigate clinical bias and sets a new standard for future evaluations of LLMs' clinical bias.
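The abstract's exact definition of AssocMAD is not given here, so the following is only a minimal illustrative sketch of the general idea it names: quantifying group disparity as the mean absolute deviation (MAD) of per-demographic-group association scores from their overall mean. The function name, score values, and group labels below are all hypothetical.

```python
# Hypothetical sketch of an AssocMAD-style disparity measure (NOT the
# paper's actual definition): mean absolute deviation of per-group
# association scores from their mean. A score of 0 means all groups are
# treated identically; larger values indicate larger disparity.
from statistics import mean


def assoc_mad(group_scores: dict[str, float]) -> float:
    """Mean absolute deviation of per-group association scores."""
    scores = list(group_scores.values())
    mu = mean(scores)
    return mean(abs(s - mu) for s in scores)


# Example: hypothetical association scores between a diagnosis and four
# demographic groups as elicited from an LLM.
scores = {"group_a": 0.82, "group_b": 0.74, "group_c": 0.78, "group_d": 0.66}
print(round(assoc_mad(scores), 4))
```

Under this sketch, identical scores across all groups would yield a disparity of exactly zero, which is the intuitive property any such metric should satisfy.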