Large language models (LLMs) are increasingly used for emotional support and mental health-related interactions outside clinical settings, yet little is known about how people evaluate and relate to these systems in everyday use. We analyze 5,126 Reddit posts from 47 mental health communities describing experiential or exploratory use of AI for emotional support or therapy. Grounded in the Technology Acceptance Model and therapeutic alliance theory, we develop a theory-informed annotation framework and apply a hybrid LLM-human pipeline to analyze evaluative language, adoption-related attitudes, and relational alignment at scale. Our results show that engagement is shaped primarily by narrated outcomes, trust, and response quality, rather than emotional bond alone. Positive sentiment is most strongly associated with task and goal alignment, while companionship-oriented use more often involves misaligned alliances and reported risks such as dependence and symptom escalation. Overall, this work demonstrates how theory-grounded constructs can be operationalized in large-scale discourse analysis and highlights the importance of studying how users interpret language technologies in sensitive, real-world contexts.