Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI. To increase real-world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations and authorities. This work describes the FUTURE-AI guideline, the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 interdisciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey and online consensus meetings. The FUTURE-AI framework was established on six guiding principles for trustworthy AI in healthcare: Fairness, Universality, Traceability, Usability, Robustness and Explainability. Through consensus, a set of 28 best practices was defined, addressing technical, clinical, legal and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development and validation to regulation, deployment and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline that provides a structured approach for constructing medical AI tools that will be trusted, deployed and adopted in real-world practice. Researchers are encouraged to take the recommendations into account at the proof-of-concept stage to facilitate the future translation of medical AI towards clinical practice.