Clinical artificial intelligence (AI) methods have been proposed for predicting social behaviors which could be reasonably understood from patient-reported data. This raises ethical concerns about respect, privacy, and patient awareness/control over how their health data is used. Ethical concerns surrounding clinical AI systems for social behavior verification were divided into three main categories: (1) the use of patient data retrospectively without informed consent for the specific task of verification, (2) the potential for inaccuracies or biases within such systems, and (3) the impact on trust in patient-provider relationships with the introduction of automated AI systems for fact-checking. Additionally, this report showed the simulated misuse of a verification system and identified a potential LLM bias against patient-reported information in favor of multimodal data, published literature, and the outputs of other AI methods (i.e., AI self-trust). Finally, recommendations were presented for mitigating the risk that AI verification systems will cause harm to patients or undermine the purpose of the healthcare system.
翻译:临床人工智能方法已被提出用于预测社会行为,这些行为可以从患者报告的数据中合理推断。这引发了关于尊重、隐私以及患者对其健康数据使用方式的知情/控制的伦理关切。围绕用于社会行为验证的临床人工智能系统的伦理关切主要分为三类:(1) 在未经患者针对验证这一具体任务知情同意的情况下,回溯性使用患者数据;(2) 此类系统存在不准确或偏见的可能性;(3) 引入用于事实核查的自动化人工智能系统对医患关系中的信任所产生的影响。此外,本报告展示了一个验证系统的模拟滥用案例,并识别出大型语言模型可能存在的一种偏见:相较于患者报告的信息,模型更倾向于相信多模态数据、已发表文献以及其他人工智能方法的输出(即AI自我信任)。最后,报告提出了若干建议,以减轻人工智能验证系统可能对患者造成伤害或损害医疗保健系统宗旨的风险。