Recent breakthroughs in large language models (LLMs) have generated both interest and concern about their potential adoption as accessible information sources or communication tools across different domains. In public health -- where the stakes are high and impacts extend across populations -- adopting LLMs poses unique challenges that require thorough evaluation. However, structured approaches for assessing the potential risks of LLMs in public health remain under-explored. To address this gap, we conducted focus groups with health professionals and health issue experiencers to unpack their concerns, situated across three distinct and critical public health issues that demand high-quality information: vaccines, opioid use disorder, and intimate partner violence. We synthesized participants' perspectives into a risk taxonomy that distinguishes and contextualizes the potential harms LLMs may introduce when positioned alongside traditional health communication. This taxonomy highlights four dimensions of risk: individual behaviors, human-centered care, the information ecosystem, and technology accountability. For each dimension, we discuss specific risks and example reflection questions to help practitioners adopt a risk-reflexive approach. This work offers a shared vocabulary and reflection tool for experts in both computing and public health to collaboratively anticipate, evaluate, and mitigate risks in deciding whether and when to employ LLM capabilities, and how to reduce harm when they are used.