Explainable AI (XAI) holds the promise of advancing the implementation and adoption of AI-based tools in practice, especially in high-stakes environments like healthcare. However, most current research is disconnected from practical applications and lacks input from end users. To address this, we conducted semi-structured interviews with clinicians to discuss their thoughts, hopes, and concerns. We find that clinicians generally think positively about developing AI-based tools for clinical practice, but they have concerns about how these tools will fit into their workflow and how they will affect the clinician-patient relationship. We further identify educating clinicians about AI as a crucial factor for the success of AI in healthcare and highlight aspects clinicians look for in (X)AI-based tools. In contrast to other studies, we take a holistic and exploratory perspective to identify general requirements, a necessary step before moving on to testing specific (X)AI products for healthcare.