LLM-based voice assistants (VAs) increasingly support older adults aging in place, yet how an assistant's agreeableness shapes the perception of its explanations remains underexplored. We conducted a study (N=70) examining how VA agreeableness influences older adults' perceptions of explanations across routine and emergency home scenarios. High-agreeableness assistants were perceived as more trustworthy, empathetic, and likable, but these benefits diminished in emergencies, where clarity outweighed warmth. Agreeableness did not affect perceived intelligence, suggesting that social tone and competence are separable dimensions. Real-time environmental explanations outperformed history-based ones, and older adults who scored high in agreeableness penalized low-agreeableness assistants more strongly. These findings underscore the need to move beyond one-size-fits-all AI explainability and instead balance personality, context, and audience.