Artificial intelligence (AI) is increasingly used to support prognosis in Alzheimer's disease (AD), but adoption remains limited by a lack of transparency and interpretability, particularly for long-term predictions, where uncertainty is intrinsic and outcomes may not be known for years. We position uncertainty visualization as an explainable AI (XAI) technique and examine how it shapes trust, confidence, and reliance when users interpret AI-generated forecasts of future transitions in cognitive decline. We conducted two studies, one with general participants (N=37) and one with experts in neuroimaging and neurology (N=10), to compare binary (present/absent) and continuous (saturation-based) uncertainty encodings. Continuous encodings improved perceived reliability and helped users recognize model limitations, whereas binary encodings increased momentary confidence, revealing expertise-dependent trade-offs in interpreting long-horizon predictions under high uncertainty. These findings surface key challenges in designing uncertainty representations for prognostic AI and yield a set of empirically grounded guidelines for building trustworthy, user-appropriate clinical decision support tools.
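For concreteness, the sketch below illustrates the two encoding styles compared in the studies: a binary flag that marks a forecast as uncertain or not, versus a continuous mapping of uncertainty to color saturation. It is a minimal illustration, not the study stimuli: the transition probabilities, uncertainty values, 0.5 flagging threshold, ten-year horizon, and matplotlib rendering are all hypothetical assumptions.

```python
# Illustrative sketch only: all values below are hypothetical, not study stimuli.
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors

years = np.arange(1, 11)                          # hypothetical forecast horizon (years)
p_decline = np.clip(0.10 + 0.07 * years, 0, 1)    # hypothetical P(decline transition)
uncertainty = np.linspace(0.1, 0.9, len(years))   # assumed to grow with horizon

fig, (ax_bin, ax_cont) = plt.subplots(1, 2, figsize=(9, 3), sharey=True)

# Binary (present/absent) encoding: uncertainty is collapsed to one bit, with a
# fixed marker flagging forecasts whose uncertainty exceeds an assumed threshold.
THRESHOLD = 0.5
ax_bin.bar(years, p_decline, color="steelblue")
for x, p, u in zip(years, p_decline, uncertainty):
    if u > THRESHOLD:
        ax_bin.text(x, p + 0.02, "?", ha="center")  # "uncertain" flag
ax_bin.set_title("Binary encoding")
ax_bin.set_xlabel("Years ahead")
ax_bin.set_ylabel("P(decline transition)")

# Continuous (saturation) encoding: per-bar saturation varies smoothly with
# certainty, so readers can see *how* uncertain each forecast is.
h, s, v = mcolors.rgb_to_hsv(mcolors.to_rgb("steelblue"))
colors = [mcolors.hsv_to_rgb((h, s * (1.0 - u), v)) for u in uncertainty]
ax_cont.bar(years, p_decline, color=colors)
ax_cont.set_title("Continuous (saturation) encoding")
ax_cont.set_xlabel("Years ahead")

plt.tight_layout()
plt.show()
```

The design contrast this makes visible is the one the abstract describes: the binary flag gives a crisp, confidence-boosting signal but hides how uncertain each forecast is, while the saturation ramp preserves that gradation at the cost of a less decisive read.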