Generative AI systems are increasingly used by patients seeking everyday health guidance, yet their appropriateness in chronic care contexts remains unclear. Focusing on Type 2 Diabetes Mellitus (T2DM), this paper presents a mixed-methods investigation into how AI-generated health information is interpreted by patients and evaluated by physicians in China. Drawing on a formative study of patient needs and a dimension-based physician evaluation, we examine AI responses along five quality dimensions: Accuracy, Safety, Clarity, Integrity, and Action Orientation. Our findings reveal that while current systems perform well in factual explanation and general lifestyle guidance, they frequently break down in safety signaling, contextual judgment, and responsibility boundaries, particularly when fluent responses invite overtrust. By treating the quality dimensions as an interpretive lens rather than a fixed framework, this work highlights the need for intelligent user interfaces that actively mediate AI outputs in chronic disease management, supporting calibrated trust and responsible boundary-setting in long-term care.