Social robots and conversational agents are being explored as supports for wellbeing, goal-setting, and everyday self-regulation. While prior work highlights their potential to motivate and guide users, much of the evidence relies on self-reported outcomes or short, researcher-mediated encounters. As a result, we know little about the interaction dynamics that unfold when people use such systems in real-world contexts, or how these dynamics should shape future robot wellbeing coaches. This paper addresses that gap through a content analysis of 4,352 messages exchanged longitudinally between 38 university students and an LLM-based wellbeing coach. Our results provide a fine-grained view of how users naturally shape, steer, and sometimes struggle within supportive human-AI dialogue, revealing patterns of user-led direction, guidance-seeking, and emotional expression. We discuss how these dynamics can inform the design of robot wellbeing coaches that support user autonomy, provide appropriate scaffolding, and uphold ethical boundaries in sustained wellbeing interactions.