There is a growing need to understand how digital systems can support clinical decision-making, particularly as artificial intelligence (AI) models become increasingly complex and less human-interpretable. This complexity raises concerns about trustworthiness, which affects the safe and effective adoption of such technologies. A better understanding of clinicians' decision-making processes, and of the explanations they require from decision support tools, is a vital component of providing effective explainable solutions. This is particularly relevant in the data-intensive, fast-paced environments of intensive care units (ICUs). To explore these issues, group interviews were conducted with seven ICU clinicians representing a range of roles and experience levels. Thematic analysis revealed three core themes: (T1) ICU decision-making relies on a wide range of factors, (T2) the complexity of patient state is challenging for shared decision-making, and (T3) requirements and capabilities of AI decision support systems. We present design recommendations derived from this clinical input, providing insights to inform future AI systems for intensive care.