Background: Advancements in Large Language Models (LLMs) hold transformative potential in healthcare; however, recent work has raised concerns about the tendency of these models to produce outputs that display racial or gender biases. Although training data is a likely source of such biases, exploration of disease and demographic associations in text data at scale has been limited.

Methods: We conducted a large-scale textual analysis using a dataset comprising diverse web sources, including arXiv, Wikipedia, and Common Crawl. The study analyzed the contexts in which various diseases are discussed alongside markers of race and gender. Given that LLMs are pre-trained on similar datasets, this approach allowed us to examine the potential biases that LLMs may learn and internalize. We compared these findings with actual demographic disease prevalence as well as GPT-4 outputs in order to evaluate the extent of bias representation.

Results: Our findings indicate that demographic terms are disproportionately associated with specific disease concepts in online texts. Gender terms are prominently associated with disease concepts, while racial terms are associated much less frequently. We find widespread disparities in the associations of specific racial and gender terms with the 18 diseases analyzed. Most prominently, we observe a significant overall overrepresentation of mentions of Black race relative to population proportions.

Conclusions: Our results highlight the need for critical examination and transparent reporting of biases in LLM pretraining datasets. Our study suggests the need to develop mitigation strategies to counteract the influence of biased training data in LLMs, particularly in sensitive domains such as healthcare.
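The kind of disease–demographic co-occurrence analysis described in the Methods can be illustrated with a minimal sketch. This is a hypothetical simplification, not the study's actual pipeline: the term lists below are illustrative placeholders (the real analysis covered 18 diseases and broader demographic lexicons), and a fixed token window stands in for whatever context definition the authors used.

```python
from collections import Counter
import re

# Illustrative term lists only; the actual study used larger lexicons
# covering 18 diseases and multiple race and gender markers.
DISEASE_TERMS = {"asthma", "diabetes", "hypertension"}
DEMOGRAPHIC_TERMS = {"black", "white", "asian", "male", "female"}

def cooccurrence_counts(text, window=10):
    """Count demographic terms appearing within `window` tokens of
    each disease term in a lowercased token stream."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok in DISEASE_TERMS:
            # Inspect a symmetric context window around the disease mention.
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for other in tokens[lo:hi]:
                if other in DEMOGRAPHIC_TERMS:
                    counts[(tok, other)] += 1
    return counts
```

Aggregating such counts over a web-scale corpus, then normalizing by population proportions or known disease prevalence, yields the kind of overrepresentation comparison the abstract reports.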